00:00:00.001 Started by upstream project "autotest-per-patch" build number 132371 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.035 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.036 The recommended git tool is: git 00:00:00.036 using credential 00000000-0000-0000-0000-000000000002 00:00:00.039 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.054 Fetching changes from the remote Git repository 00:00:00.056 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.073 Using shallow fetch with depth 1 00:00:00.073 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.073 > git --version # timeout=10 00:00:00.090 > git --version # 'git version 2.39.2' 00:00:00.090 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.103 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.103 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.841 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.853 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.867 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD) 00:00:06.867 > git config core.sparsecheckout # timeout=10 00:00:06.878 > git read-tree -mu HEAD # timeout=10 00:00:06.896 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5 00:00:06.920 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag" 00:00:06.920 > git 
rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10 00:00:07.003 [Pipeline] Start of Pipeline 00:00:07.015 [Pipeline] library 00:00:07.016 Loading library shm_lib@master 00:00:07.016 Library shm_lib@master is cached. Copying from home. 00:00:07.034 [Pipeline] node 00:00:07.048 Running on WFP6 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:07.050 [Pipeline] { 00:00:07.059 [Pipeline] catchError 00:00:07.061 [Pipeline] { 00:00:07.071 [Pipeline] wrap 00:00:07.078 [Pipeline] { 00:00:07.084 [Pipeline] stage 00:00:07.086 [Pipeline] { (Prologue) 00:00:07.308 [Pipeline] sh 00:00:07.586 + logger -p user.info -t JENKINS-CI 00:00:07.603 [Pipeline] echo 00:00:07.604 Node: WFP6 00:00:07.611 [Pipeline] sh 00:00:07.906 [Pipeline] setCustomBuildProperty 00:00:07.917 [Pipeline] echo 00:00:07.919 Cleanup processes 00:00:07.924 [Pipeline] sh 00:00:08.207 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.207 2950032 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.224 [Pipeline] sh 00:00:08.508 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.508 ++ grep -v 'sudo pgrep' 00:00:08.508 ++ awk '{print $1}' 00:00:08.508 + sudo kill -9 00:00:08.508 + true 00:00:08.526 [Pipeline] cleanWs 00:00:08.537 [WS-CLEANUP] Deleting project workspace... 00:00:08.537 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.544 [WS-CLEANUP] done 00:00:08.548 [Pipeline] setCustomBuildProperty 00:00:08.567 [Pipeline] sh 00:00:08.850 + sudo git config --global --replace-all safe.directory '*' 00:00:08.952 [Pipeline] httpRequest 00:00:09.317 [Pipeline] echo 00:00:09.318 Sorcerer 10.211.164.20 is alive 00:00:09.325 [Pipeline] retry 00:00:09.327 [Pipeline] { 00:00:09.338 [Pipeline] httpRequest 00:00:09.343 HttpMethod: GET 00:00:09.344 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.344 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:09.367 Response Code: HTTP/1.1 200 OK 00:00:09.367 Success: Status code 200 is in the accepted range: 200,404 00:00:09.367 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:29.856 [Pipeline] } 00:00:29.873 [Pipeline] // retry 00:00:29.881 [Pipeline] sh 00:00:30.165 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz 00:00:30.182 [Pipeline] httpRequest 00:00:30.474 [Pipeline] echo 00:00:30.476 Sorcerer 10.211.164.20 is alive 00:00:30.486 [Pipeline] retry 00:00:30.488 [Pipeline] { 00:00:30.503 [Pipeline] httpRequest 00:00:30.507 HttpMethod: GET 00:00:30.508 URL: http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:30.508 Sending request to url: http://10.211.164.20/packages/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:00:30.528 Response Code: HTTP/1.1 200 OK 00:00:30.529 Success: Status code 200 is in the accepted range: 200,404 00:00:30.529 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:39.985 [Pipeline] } 00:01:40.003 [Pipeline] // retry 00:01:40.010 [Pipeline] sh 00:01:40.294 + tar --no-same-owner -xf spdk_097badaebc5925d7299eba66d2899808afbab0b1.tar.gz 00:01:42.842 [Pipeline] sh 00:01:43.125 + git -C spdk log 
--oneline -n5 00:01:43.125 097badaeb test/nvmf: Solve ambiguity around $NVMF_SECOND_TARGET_IP 00:01:43.125 2741dd1ac test/nvmf: Don't pin nvmf_bdevperf and nvmf_target_disconnect to phy 00:01:43.125 4f0cbdcd1 test/nvmf: Remove all transport conditions from the test suites 00:01:43.125 097b7c969 test/nvmf: Drop $RDMA_IP_LIST 00:01:43.126 400f484f7 test/nvmf: Drop $NVMF_INITIATOR_IP in favor of $NVMF_FIRST_INITIATOR_IP 00:01:43.137 [Pipeline] } 00:01:43.150 [Pipeline] // stage 00:01:43.160 [Pipeline] stage 00:01:43.162 [Pipeline] { (Prepare) 00:01:43.178 [Pipeline] writeFile 00:01:43.194 [Pipeline] sh 00:01:43.474 + logger -p user.info -t JENKINS-CI 00:01:43.488 [Pipeline] sh 00:01:43.772 + logger -p user.info -t JENKINS-CI 00:01:43.782 [Pipeline] sh 00:01:44.064 + cat autorun-spdk.conf 00:01:44.064 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.064 SPDK_TEST_NVMF=1 00:01:44.064 SPDK_TEST_NVME_CLI=1 00:01:44.064 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.064 SPDK_TEST_NVMF_NICS=e810 00:01:44.064 SPDK_TEST_VFIOUSER=1 00:01:44.064 SPDK_RUN_UBSAN=1 00:01:44.064 NET_TYPE=phy 00:01:44.072 RUN_NIGHTLY=0 00:01:44.076 [Pipeline] readFile 00:01:44.105 [Pipeline] withEnv 00:01:44.108 [Pipeline] { 00:01:44.119 [Pipeline] sh 00:01:44.400 + set -ex 00:01:44.400 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:44.400 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:44.400 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:44.400 ++ SPDK_TEST_NVMF=1 00:01:44.400 ++ SPDK_TEST_NVME_CLI=1 00:01:44.400 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:44.400 ++ SPDK_TEST_NVMF_NICS=e810 00:01:44.400 ++ SPDK_TEST_VFIOUSER=1 00:01:44.400 ++ SPDK_RUN_UBSAN=1 00:01:44.400 ++ NET_TYPE=phy 00:01:44.400 ++ RUN_NIGHTLY=0 00:01:44.400 + case $SPDK_TEST_NVMF_NICS in 00:01:44.400 + DRIVERS=ice 00:01:44.400 + [[ tcp == \r\d\m\a ]] 00:01:44.400 + [[ -n ice ]] 00:01:44.400 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:44.400 rmmod: ERROR: Module mlx4_ib is not currently 
loaded 00:01:44.400 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:44.400 rmmod: ERROR: Module irdma is not currently loaded 00:01:44.400 rmmod: ERROR: Module i40iw is not currently loaded 00:01:44.400 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:44.400 + true 00:01:44.400 + for D in $DRIVERS 00:01:44.400 + sudo modprobe ice 00:01:44.400 + exit 0 00:01:44.410 [Pipeline] } 00:01:44.426 [Pipeline] // withEnv 00:01:44.433 [Pipeline] } 00:01:44.449 [Pipeline] // stage 00:01:44.458 [Pipeline] catchError 00:01:44.460 [Pipeline] { 00:01:44.476 [Pipeline] timeout 00:01:44.476 Timeout set to expire in 1 hr 0 min 00:01:44.478 [Pipeline] { 00:01:44.495 [Pipeline] stage 00:01:44.498 [Pipeline] { (Tests) 00:01:44.514 [Pipeline] sh 00:01:44.803 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.803 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.803 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.803 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:44.803 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:44.803 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.803 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:44.803 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.803 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:44.803 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:44.803 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:44.803 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:44.803 + source /etc/os-release 00:01:44.803 ++ NAME='Fedora Linux' 00:01:44.803 ++ VERSION='39 (Cloud Edition)' 00:01:44.803 ++ ID=fedora 00:01:44.803 ++ VERSION_ID=39 00:01:44.803 ++ VERSION_CODENAME= 00:01:44.803 ++ PLATFORM_ID=platform:f39 00:01:44.803 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)' 00:01:44.803 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:44.803 ++ LOGO=fedora-logo-icon 00:01:44.803 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39 00:01:44.803 ++ HOME_URL=https://fedoraproject.org/ 00:01:44.803 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/ 00:01:44.803 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:44.803 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:44.803 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:44.803 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39 00:01:44.803 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:44.803 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39 00:01:44.803 ++ SUPPORT_END=2024-11-12 00:01:44.803 ++ VARIANT='Cloud Edition' 00:01:44.803 ++ VARIANT_ID=cloud 00:01:44.803 + uname -a 00:01:44.803 Linux spdk-wfp-06 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux 00:01:44.803 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:47.342 Hugepages 00:01:47.342 node hugesize free / total 00:01:47.342 node0 1048576kB 0 / 0 00:01:47.342 node0 2048kB 0 / 0 00:01:47.342 node1 1048576kB 0 / 0 00:01:47.342 node1 2048kB 0 / 0 00:01:47.342 00:01:47.342 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:47.342 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 
00:01:47.342 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:47.342 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:47.342 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:47.342 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:47.342 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:47.342 + rm -f /tmp/spdk-ld-path 00:01:47.342 + source autorun-spdk.conf 00:01:47.342 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.342 ++ SPDK_TEST_NVMF=1 00:01:47.342 ++ SPDK_TEST_NVME_CLI=1 00:01:47.342 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:47.342 ++ SPDK_TEST_NVMF_NICS=e810 00:01:47.342 ++ SPDK_TEST_VFIOUSER=1 00:01:47.342 ++ SPDK_RUN_UBSAN=1 00:01:47.342 ++ NET_TYPE=phy 00:01:47.342 ++ RUN_NIGHTLY=0 00:01:47.342 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:47.342 + [[ -n '' ]] 00:01:47.342 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:47.342 + for M in /var/spdk/build-*-manifest.txt 00:01:47.342 + [[ -f /var/spdk/build-kernel-manifest.txt ]] 00:01:47.342 + cp /var/spdk/build-kernel-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.342 + for M in /var/spdk/build-*-manifest.txt 00:01:47.342 + [[ -f /var/spdk/build-pkg-manifest.txt ]] 00:01:47.342 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.342 + for M in /var/spdk/build-*-manifest.txt 00:01:47.342 + [[ -f 
/var/spdk/build-repo-manifest.txt ]] 00:01:47.342 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:47.342 ++ uname 00:01:47.342 + [[ Linux == \L\i\n\u\x ]] 00:01:47.342 + sudo dmesg -T 00:01:47.602 + sudo dmesg --clear 00:01:47.602 + dmesg_pid=2951489 00:01:47.602 + [[ Fedora Linux == FreeBSD ]] 00:01:47.602 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.602 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:47.602 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:47.602 + [[ -x /usr/src/fio-static/fio ]] 00:01:47.602 + export FIO_BIN=/usr/src/fio-static/fio 00:01:47.602 + FIO_BIN=/usr/src/fio-static/fio 00:01:47.602 + sudo dmesg -Tw 00:01:47.602 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:47.602 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:47.602 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:47.602 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.602 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:47.602 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:47.602 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.602 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:47.602 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.602 10:19:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.602 10:19:28 -- spdk/autorun.sh@20 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@2 -- $ SPDK_TEST_NVMF=1 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@3 -- $ SPDK_TEST_NVME_CLI=1 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@4 -- $ SPDK_TEST_NVMF_TRANSPORT=tcp 
00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@5 -- $ SPDK_TEST_NVMF_NICS=e810 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@6 -- $ SPDK_TEST_VFIOUSER=1 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@7 -- $ SPDK_RUN_UBSAN=1 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@8 -- $ NET_TYPE=phy 00:01:47.602 10:19:28 -- nvmf-tcp-phy-autotest/autorun-spdk.conf@9 -- $ RUN_NIGHTLY=0 00:01:47.602 10:19:28 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT 00:01:47.602 10:19:28 -- spdk/autorun.sh@25 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autobuild.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:47.602 10:19:28 -- common/autotest_common.sh@1692 -- $ [[ n == y ]] 00:01:47.602 10:19:28 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:47.602 10:19:28 -- scripts/common.sh@15 -- $ shopt -s extglob 00:01:47.602 10:19:28 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:47.602 10:19:28 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:47.602 10:19:28 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:47.602 10:19:28 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.602 10:19:28 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.602 10:19:28 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.602 10:19:28 -- paths/export.sh@5 -- $ export PATH 00:01:47.602 10:19:28 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:47.602 10:19:28 -- common/autobuild_common.sh@492 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:47.602 10:19:28 -- common/autobuild_common.sh@493 -- $ date +%s 00:01:47.602 10:19:28 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1732094368.XXXXXX 00:01:47.602 10:19:28 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1732094368.cjDbIR 00:01:47.602 10:19:28 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]] 00:01:47.602 10:19:28 -- 
common/autobuild_common.sh@499 -- $ '[' -n '' ']' 00:01:47.602 10:19:28 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:47.602 10:19:28 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:47.602 10:19:28 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:47.602 10:19:28 -- common/autobuild_common.sh@509 -- $ get_config_params 00:01:47.602 10:19:28 -- common/autotest_common.sh@409 -- $ xtrace_disable 00:01:47.602 10:19:28 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.602 10:19:28 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:47.602 10:19:28 -- common/autobuild_common.sh@511 -- $ start_monitor_resources 00:01:47.602 10:19:28 -- pm/common@17 -- $ local monitor 00:01:47.602 10:19:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.602 10:19:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.602 10:19:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.602 10:19:28 -- pm/common@21 -- $ date +%s 00:01:47.602 10:19:28 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:47.602 10:19:28 -- pm/common@21 -- $ date +%s 00:01:47.602 10:19:28 -- pm/common@25 -- $ sleep 1 00:01:47.603 10:19:28 -- pm/common@21 -- $ date +%s 00:01:47.603 10:19:28 -- pm/common@21 -- $ date +%s 00:01:47.603 10:19:28 -- pm/common@21 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094368 00:01:47.603 10:19:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094368 00:01:47.603 10:19:28 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094368 00:01:47.603 10:19:28 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1732094368 00:01:47.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094368_collect-cpu-load.pm.log 00:01:47.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094368_collect-vmstat.pm.log 00:01:47.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094368_collect-cpu-temp.pm.log 00:01:47.862 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1732094368_collect-bmc-pm.bmc.pm.log 00:01:48.800 10:19:29 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT 00:01:48.800 10:19:29 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:48.800 10:19:29 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:48.800 10:19:29 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:48.800 10:19:29 -- spdk/autobuild.sh@16 -- $ date -u 00:01:48.800 Wed Nov 20 09:19:29 AM UTC 2024 00:01:48.800 10:19:29 -- spdk/autobuild.sh@17 -- $ git describe --tags 
00:01:48.800 v25.01-pre-206-g097badaeb 00:01:48.800 10:19:29 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:48.800 10:19:29 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:48.800 10:19:29 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:48.800 10:19:29 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:01:48.800 10:19:29 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:01:48.800 10:19:29 -- common/autotest_common.sh@10 -- $ set +x 00:01:48.800 ************************************ 00:01:48.800 START TEST ubsan 00:01:48.800 ************************************ 00:01:48.800 10:19:29 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan' 00:01:48.800 using ubsan 00:01:48.800 00:01:48.800 real 0m0.000s 00:01:48.800 user 0m0.000s 00:01:48.800 sys 0m0.000s 00:01:48.800 10:19:29 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:01:48.800 10:19:29 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:48.800 ************************************ 00:01:48.800 END TEST ubsan 00:01:48.800 ************************************ 00:01:48.800 10:19:29 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:48.800 10:19:29 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:48.800 10:19:29 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:48.800 10:19:29 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:48.800 10:19:29 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:48.800 10:19:29 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:48.801 10:19:29 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:48.801 10:19:29 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:48.801 10:19:29 -- spdk/autobuild.sh@67 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:49.059 Using default SPDK env in 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:49.059 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:49.317 Using 'verbs' RDMA provider 00:02:02.471 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:02:14.735 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:02:14.735 Creating mk/config.mk...done. 00:02:14.735 Creating mk/cc.flags.mk...done. 00:02:14.735 Type 'make' to build. 00:02:14.735 10:19:54 -- spdk/autobuild.sh@70 -- $ run_test make make -j96 00:02:14.735 10:19:54 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']' 00:02:14.735 10:19:54 -- common/autotest_common.sh@1111 -- $ xtrace_disable 00:02:14.735 10:19:54 -- common/autotest_common.sh@10 -- $ set +x 00:02:14.735 ************************************ 00:02:14.735 START TEST make 00:02:14.735 ************************************ 00:02:14.735 10:19:54 make -- common/autotest_common.sh@1129 -- $ make -j96 00:02:14.735 make[1]: Nothing to be done for 'all'. 
00:02:16.121 The Meson build system 00:02:16.121 Version: 1.5.0 00:02:16.121 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:02:16.121 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:16.121 Build type: native build 00:02:16.121 Project name: libvfio-user 00:02:16.121 Project version: 0.0.1 00:02:16.121 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:16.121 C linker for the host machine: cc ld.bfd 2.40-14 00:02:16.121 Host machine cpu family: x86_64 00:02:16.121 Host machine cpu: x86_64 00:02:16.121 Run-time dependency threads found: YES 00:02:16.121 Library dl found: YES 00:02:16.121 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:16.121 Run-time dependency json-c found: YES 0.17 00:02:16.121 Run-time dependency cmocka found: YES 1.1.7 00:02:16.121 Program pytest-3 found: NO 00:02:16.121 Program flake8 found: NO 00:02:16.121 Program misspell-fixer found: NO 00:02:16.121 Program restructuredtext-lint found: NO 00:02:16.121 Program valgrind found: YES (/usr/bin/valgrind) 00:02:16.121 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:16.121 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:16.121 Compiler for C supports arguments -Wwrite-strings: YES 00:02:16.121 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:02:16.121 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:02:16.121 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:02:16.121 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:02:16.121 Build targets in project: 8 00:02:16.121 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:02:16.121 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:02:16.121 00:02:16.121 libvfio-user 0.0.1 00:02:16.121 00:02:16.121 User defined options 00:02:16.121 buildtype : debug 00:02:16.121 default_library: shared 00:02:16.121 libdir : /usr/local/lib 00:02:16.121 00:02:16.121 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:16.687 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:16.687 [1/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:02:16.687 [2/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:02:16.687 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:02:16.687 [4/37] Compiling C object samples/null.p/null.c.o 00:02:16.687 [5/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:02:16.687 [6/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:02:16.687 [7/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:02:16.687 [8/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:02:16.687 [9/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:02:16.687 [10/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:02:16.687 [11/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:02:16.687 [12/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:02:16.687 [13/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:02:16.687 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:02:16.687 [15/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:02:16.687 [16/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:02:16.687 [17/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:02:16.687 [18/37] Compiling C object 
lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:02:16.687 [19/37] Compiling C object test/unit_tests.p/mocks.c.o 00:02:16.687 [20/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:02:16.687 [21/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:02:16.687 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:02:16.687 [23/37] Compiling C object samples/client.p/client.c.o 00:02:16.687 [24/37] Compiling C object samples/server.p/server.c.o 00:02:16.687 [25/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:02:16.687 [26/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:02:16.946 [27/37] Linking target samples/client 00:02:16.946 [28/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:02:16.946 [29/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:02:16.946 [30/37] Linking target test/unit_tests 00:02:16.946 [31/37] Linking target lib/libvfio-user.so.0.0.1 00:02:16.946 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:02:17.206 [33/37] Linking target samples/server 00:02:17.206 [34/37] Linking target samples/null 00:02:17.206 [35/37] Linking target samples/shadow_ioeventfd_server 00:02:17.206 [36/37] Linking target samples/lspci 00:02:17.206 [37/37] Linking target samples/gpio-pci-idio-16 00:02:17.206 INFO: autodetecting backend as ninja 00:02:17.206 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.206 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:02:17.465 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:02:17.465 ninja: no work to do. 
00:02:22.738 The Meson build system 00:02:22.738 Version: 1.5.0 00:02:22.738 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:02:22.738 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:02:22.738 Build type: native build 00:02:22.738 Program cat found: YES (/usr/bin/cat) 00:02:22.738 Project name: DPDK 00:02:22.738 Project version: 24.03.0 00:02:22.738 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)") 00:02:22.738 C linker for the host machine: cc ld.bfd 2.40-14 00:02:22.738 Host machine cpu family: x86_64 00:02:22.738 Host machine cpu: x86_64 00:02:22.738 Message: ## Building in Developer Mode ## 00:02:22.738 Program pkg-config found: YES (/usr/bin/pkg-config) 00:02:22.738 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:02:22.738 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:02:22.738 Program python3 found: YES (/usr/bin/python3) 00:02:22.738 Program cat found: YES (/usr/bin/cat) 00:02:22.738 Compiler for C supports arguments -march=native: YES 00:02:22.738 Checking for size of "void *" : 8 00:02:22.738 Checking for size of "void *" : 8 (cached) 00:02:22.738 Compiler for C supports link arguments -Wl,--undefined-version: YES 00:02:22.738 Library m found: YES 00:02:22.738 Library numa found: YES 00:02:22.739 Has header "numaif.h" : YES 00:02:22.739 Library fdt found: NO 00:02:22.739 Library execinfo found: NO 00:02:22.739 Has header "execinfo.h" : YES 00:02:22.739 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5 00:02:22.739 Run-time dependency libarchive found: NO (tried pkgconfig) 00:02:22.739 Run-time dependency libbsd found: NO (tried pkgconfig) 00:02:22.739 Run-time dependency jansson found: NO (tried pkgconfig) 00:02:22.739 Run-time dependency openssl found: YES 3.1.1 00:02:22.739 Run-time 
dependency libpcap found: YES 1.10.4 00:02:22.739 Has header "pcap.h" with dependency libpcap: YES 00:02:22.739 Compiler for C supports arguments -Wcast-qual: YES 00:02:22.739 Compiler for C supports arguments -Wdeprecated: YES 00:02:22.739 Compiler for C supports arguments -Wformat: YES 00:02:22.739 Compiler for C supports arguments -Wformat-nonliteral: NO 00:02:22.739 Compiler for C supports arguments -Wformat-security: NO 00:02:22.739 Compiler for C supports arguments -Wmissing-declarations: YES 00:02:22.739 Compiler for C supports arguments -Wmissing-prototypes: YES 00:02:22.739 Compiler for C supports arguments -Wnested-externs: YES 00:02:22.739 Compiler for C supports arguments -Wold-style-definition: YES 00:02:22.739 Compiler for C supports arguments -Wpointer-arith: YES 00:02:22.739 Compiler for C supports arguments -Wsign-compare: YES 00:02:22.739 Compiler for C supports arguments -Wstrict-prototypes: YES 00:02:22.739 Compiler for C supports arguments -Wundef: YES 00:02:22.739 Compiler for C supports arguments -Wwrite-strings: YES 00:02:22.739 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:02:22.739 Compiler for C supports arguments -Wno-packed-not-aligned: YES 00:02:22.739 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:02:22.739 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:02:22.739 Program objdump found: YES (/usr/bin/objdump) 00:02:22.739 Compiler for C supports arguments -mavx512f: YES 00:02:22.739 Checking if "AVX512 checking" compiles: YES 00:02:22.739 Fetching value of define "__SSE4_2__" : 1 00:02:22.739 Fetching value of define "__AES__" : 1 00:02:22.739 Fetching value of define "__AVX__" : 1 00:02:22.739 Fetching value of define "__AVX2__" : 1 00:02:22.739 Fetching value of define "__AVX512BW__" : 1 00:02:22.739 Fetching value of define "__AVX512CD__" : 1 00:02:22.739 Fetching value of define "__AVX512DQ__" : 1 00:02:22.739 Fetching value of define "__AVX512F__" : 1 
00:02:22.739 Fetching value of define "__AVX512VL__" : 1 00:02:22.739 Fetching value of define "__PCLMUL__" : 1 00:02:22.739 Fetching value of define "__RDRND__" : 1 00:02:22.739 Fetching value of define "__RDSEED__" : 1 00:02:22.739 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:02:22.739 Fetching value of define "__znver1__" : (undefined) 00:02:22.739 Fetching value of define "__znver2__" : (undefined) 00:02:22.739 Fetching value of define "__znver3__" : (undefined) 00:02:22.739 Fetching value of define "__znver4__" : (undefined) 00:02:22.739 Compiler for C supports arguments -Wno-format-truncation: YES 00:02:22.739 Message: lib/log: Defining dependency "log" 00:02:22.739 Message: lib/kvargs: Defining dependency "kvargs" 00:02:22.739 Message: lib/telemetry: Defining dependency "telemetry" 00:02:22.739 Checking for function "getentropy" : NO 00:02:22.739 Message: lib/eal: Defining dependency "eal" 00:02:22.739 Message: lib/ring: Defining dependency "ring" 00:02:22.739 Message: lib/rcu: Defining dependency "rcu" 00:02:22.739 Message: lib/mempool: Defining dependency "mempool" 00:02:22.739 Message: lib/mbuf: Defining dependency "mbuf" 00:02:22.739 Fetching value of define "__PCLMUL__" : 1 (cached) 00:02:22.739 Fetching value of define "__AVX512F__" : 1 (cached) 00:02:22.739 Fetching value of define "__AVX512BW__" : 1 (cached) 00:02:22.739 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:02:22.739 Fetching value of define "__AVX512VL__" : 1 (cached) 00:02:22.739 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:02:22.739 Compiler for C supports arguments -mpclmul: YES 00:02:22.739 Compiler for C supports arguments -maes: YES 00:02:22.739 Compiler for C supports arguments -mavx512f: YES (cached) 00:02:22.739 Compiler for C supports arguments -mavx512bw: YES 00:02:22.739 Compiler for C supports arguments -mavx512dq: YES 00:02:22.739 Compiler for C supports arguments -mavx512vl: YES 00:02:22.739 Compiler for C supports arguments 
-mvpclmulqdq: YES 00:02:22.739 Compiler for C supports arguments -mavx2: YES 00:02:22.739 Compiler for C supports arguments -mavx: YES 00:02:22.739 Message: lib/net: Defining dependency "net" 00:02:22.739 Message: lib/meter: Defining dependency "meter" 00:02:22.739 Message: lib/ethdev: Defining dependency "ethdev" 00:02:22.739 Message: lib/pci: Defining dependency "pci" 00:02:22.739 Message: lib/cmdline: Defining dependency "cmdline" 00:02:22.739 Message: lib/hash: Defining dependency "hash" 00:02:22.739 Message: lib/timer: Defining dependency "timer" 00:02:22.739 Message: lib/compressdev: Defining dependency "compressdev" 00:02:22.739 Message: lib/cryptodev: Defining dependency "cryptodev" 00:02:22.739 Message: lib/dmadev: Defining dependency "dmadev" 00:02:22.739 Compiler for C supports arguments -Wno-cast-qual: YES 00:02:22.739 Message: lib/power: Defining dependency "power" 00:02:22.739 Message: lib/reorder: Defining dependency "reorder" 00:02:22.739 Message: lib/security: Defining dependency "security" 00:02:22.739 Has header "linux/userfaultfd.h" : YES 00:02:22.739 Has header "linux/vduse.h" : YES 00:02:22.739 Message: lib/vhost: Defining dependency "vhost" 00:02:22.739 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:02:22.739 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:02:22.739 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:02:22.739 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:02:22.739 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:02:22.739 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:02:22.739 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:02:22.739 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:02:22.739 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:02:22.739 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 
00:02:22.739 Program doxygen found: YES (/usr/local/bin/doxygen) 00:02:22.739 Configuring doxy-api-html.conf using configuration 00:02:22.739 Configuring doxy-api-man.conf using configuration 00:02:22.739 Program mandb found: YES (/usr/bin/mandb) 00:02:22.739 Program sphinx-build found: NO 00:02:22.739 Configuring rte_build_config.h using configuration 00:02:22.739 Message: 00:02:22.739 ================= 00:02:22.739 Applications Enabled 00:02:22.739 ================= 00:02:22.739 00:02:22.739 apps: 00:02:22.739 00:02:22.739 00:02:22.739 Message: 00:02:22.739 ================= 00:02:22.739 Libraries Enabled 00:02:22.739 ================= 00:02:22.739 00:02:22.739 libs: 00:02:22.739 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:02:22.739 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:02:22.739 cryptodev, dmadev, power, reorder, security, vhost, 00:02:22.739 00:02:22.739 Message: 00:02:22.739 =============== 00:02:22.739 Drivers Enabled 00:02:22.739 =============== 00:02:22.739 00:02:22.739 common: 00:02:22.739 00:02:22.739 bus: 00:02:22.739 pci, vdev, 00:02:22.739 mempool: 00:02:22.739 ring, 00:02:22.739 dma: 00:02:22.739 00:02:22.739 net: 00:02:22.739 00:02:22.739 crypto: 00:02:22.739 00:02:22.739 compress: 00:02:22.739 00:02:22.739 vdpa: 00:02:22.739 00:02:22.739 00:02:22.739 Message: 00:02:22.739 ================= 00:02:22.739 Content Skipped 00:02:22.739 ================= 00:02:22.739 00:02:22.739 apps: 00:02:22.739 dumpcap: explicitly disabled via build config 00:02:22.739 graph: explicitly disabled via build config 00:02:22.739 pdump: explicitly disabled via build config 00:02:22.739 proc-info: explicitly disabled via build config 00:02:22.739 test-acl: explicitly disabled via build config 00:02:22.739 test-bbdev: explicitly disabled via build config 00:02:22.739 test-cmdline: explicitly disabled via build config 00:02:22.739 test-compress-perf: explicitly disabled via build config 00:02:22.739 test-crypto-perf: explicitly disabled 
via build config 00:02:22.739 test-dma-perf: explicitly disabled via build config 00:02:22.739 test-eventdev: explicitly disabled via build config 00:02:22.739 test-fib: explicitly disabled via build config 00:02:22.739 test-flow-perf: explicitly disabled via build config 00:02:22.739 test-gpudev: explicitly disabled via build config 00:02:22.739 test-mldev: explicitly disabled via build config 00:02:22.739 test-pipeline: explicitly disabled via build config 00:02:22.739 test-pmd: explicitly disabled via build config 00:02:22.739 test-regex: explicitly disabled via build config 00:02:22.739 test-sad: explicitly disabled via build config 00:02:22.739 test-security-perf: explicitly disabled via build config 00:02:22.739 00:02:22.739 libs: 00:02:22.739 argparse: explicitly disabled via build config 00:02:22.739 metrics: explicitly disabled via build config 00:02:22.739 acl: explicitly disabled via build config 00:02:22.739 bbdev: explicitly disabled via build config 00:02:22.739 bitratestats: explicitly disabled via build config 00:02:22.739 bpf: explicitly disabled via build config 00:02:22.739 cfgfile: explicitly disabled via build config 00:02:22.739 distributor: explicitly disabled via build config 00:02:22.739 efd: explicitly disabled via build config 00:02:22.739 eventdev: explicitly disabled via build config 00:02:22.739 dispatcher: explicitly disabled via build config 00:02:22.739 gpudev: explicitly disabled via build config 00:02:22.739 gro: explicitly disabled via build config 00:02:22.739 gso: explicitly disabled via build config 00:02:22.739 ip_frag: explicitly disabled via build config 00:02:22.739 jobstats: explicitly disabled via build config 00:02:22.739 latencystats: explicitly disabled via build config 00:02:22.739 lpm: explicitly disabled via build config 00:02:22.739 member: explicitly disabled via build config 00:02:22.739 pcapng: explicitly disabled via build config 00:02:22.739 rawdev: explicitly disabled via build config 00:02:22.739 regexdev: 
explicitly disabled via build config 00:02:22.739 mldev: explicitly disabled via build config 00:02:22.739 rib: explicitly disabled via build config 00:02:22.739 sched: explicitly disabled via build config 00:02:22.739 stack: explicitly disabled via build config 00:02:22.739 ipsec: explicitly disabled via build config 00:02:22.739 pdcp: explicitly disabled via build config 00:02:22.739 fib: explicitly disabled via build config 00:02:22.739 port: explicitly disabled via build config 00:02:22.740 pdump: explicitly disabled via build config 00:02:22.740 table: explicitly disabled via build config 00:02:22.740 pipeline: explicitly disabled via build config 00:02:22.740 graph: explicitly disabled via build config 00:02:22.740 node: explicitly disabled via build config 00:02:22.740 00:02:22.740 drivers: 00:02:22.740 common/cpt: not in enabled drivers build config 00:02:22.740 common/dpaax: not in enabled drivers build config 00:02:22.740 common/iavf: not in enabled drivers build config 00:02:22.740 common/idpf: not in enabled drivers build config 00:02:22.740 common/ionic: not in enabled drivers build config 00:02:22.740 common/mvep: not in enabled drivers build config 00:02:22.740 common/octeontx: not in enabled drivers build config 00:02:22.740 bus/auxiliary: not in enabled drivers build config 00:02:22.740 bus/cdx: not in enabled drivers build config 00:02:22.740 bus/dpaa: not in enabled drivers build config 00:02:22.740 bus/fslmc: not in enabled drivers build config 00:02:22.740 bus/ifpga: not in enabled drivers build config 00:02:22.740 bus/platform: not in enabled drivers build config 00:02:22.740 bus/uacce: not in enabled drivers build config 00:02:22.740 bus/vmbus: not in enabled drivers build config 00:02:22.740 common/cnxk: not in enabled drivers build config 00:02:22.740 common/mlx5: not in enabled drivers build config 00:02:22.740 common/nfp: not in enabled drivers build config 00:02:22.740 common/nitrox: not in enabled drivers build config 00:02:22.740 
common/qat: not in enabled drivers build config 00:02:22.740 common/sfc_efx: not in enabled drivers build config 00:02:22.740 mempool/bucket: not in enabled drivers build config 00:02:22.740 mempool/cnxk: not in enabled drivers build config 00:02:22.740 mempool/dpaa: not in enabled drivers build config 00:02:22.740 mempool/dpaa2: not in enabled drivers build config 00:02:22.740 mempool/octeontx: not in enabled drivers build config 00:02:22.740 mempool/stack: not in enabled drivers build config 00:02:22.740 dma/cnxk: not in enabled drivers build config 00:02:22.740 dma/dpaa: not in enabled drivers build config 00:02:22.740 dma/dpaa2: not in enabled drivers build config 00:02:22.740 dma/hisilicon: not in enabled drivers build config 00:02:22.740 dma/idxd: not in enabled drivers build config 00:02:22.740 dma/ioat: not in enabled drivers build config 00:02:22.740 dma/skeleton: not in enabled drivers build config 00:02:22.740 net/af_packet: not in enabled drivers build config 00:02:22.740 net/af_xdp: not in enabled drivers build config 00:02:22.740 net/ark: not in enabled drivers build config 00:02:22.740 net/atlantic: not in enabled drivers build config 00:02:22.740 net/avp: not in enabled drivers build config 00:02:22.740 net/axgbe: not in enabled drivers build config 00:02:22.740 net/bnx2x: not in enabled drivers build config 00:02:22.740 net/bnxt: not in enabled drivers build config 00:02:22.740 net/bonding: not in enabled drivers build config 00:02:22.740 net/cnxk: not in enabled drivers build config 00:02:22.740 net/cpfl: not in enabled drivers build config 00:02:22.740 net/cxgbe: not in enabled drivers build config 00:02:22.740 net/dpaa: not in enabled drivers build config 00:02:22.740 net/dpaa2: not in enabled drivers build config 00:02:22.740 net/e1000: not in enabled drivers build config 00:02:22.740 net/ena: not in enabled drivers build config 00:02:22.740 net/enetc: not in enabled drivers build config 00:02:22.740 net/enetfec: not in enabled drivers build 
config 00:02:22.740 net/enic: not in enabled drivers build config 00:02:22.740 net/failsafe: not in enabled drivers build config 00:02:22.740 net/fm10k: not in enabled drivers build config 00:02:22.740 net/gve: not in enabled drivers build config 00:02:22.740 net/hinic: not in enabled drivers build config 00:02:22.740 net/hns3: not in enabled drivers build config 00:02:22.740 net/i40e: not in enabled drivers build config 00:02:22.740 net/iavf: not in enabled drivers build config 00:02:22.740 net/ice: not in enabled drivers build config 00:02:22.740 net/idpf: not in enabled drivers build config 00:02:22.740 net/igc: not in enabled drivers build config 00:02:22.740 net/ionic: not in enabled drivers build config 00:02:22.740 net/ipn3ke: not in enabled drivers build config 00:02:22.740 net/ixgbe: not in enabled drivers build config 00:02:22.740 net/mana: not in enabled drivers build config 00:02:22.740 net/memif: not in enabled drivers build config 00:02:22.740 net/mlx4: not in enabled drivers build config 00:02:22.740 net/mlx5: not in enabled drivers build config 00:02:22.740 net/mvneta: not in enabled drivers build config 00:02:22.740 net/mvpp2: not in enabled drivers build config 00:02:22.740 net/netvsc: not in enabled drivers build config 00:02:22.740 net/nfb: not in enabled drivers build config 00:02:22.740 net/nfp: not in enabled drivers build config 00:02:22.740 net/ngbe: not in enabled drivers build config 00:02:22.740 net/null: not in enabled drivers build config 00:02:22.740 net/octeontx: not in enabled drivers build config 00:02:22.740 net/octeon_ep: not in enabled drivers build config 00:02:22.740 net/pcap: not in enabled drivers build config 00:02:22.740 net/pfe: not in enabled drivers build config 00:02:22.740 net/qede: not in enabled drivers build config 00:02:22.740 net/ring: not in enabled drivers build config 00:02:22.740 net/sfc: not in enabled drivers build config 00:02:22.740 net/softnic: not in enabled drivers build config 00:02:22.740 net/tap: 
not in enabled drivers build config 00:02:22.740 net/thunderx: not in enabled drivers build config 00:02:22.740 net/txgbe: not in enabled drivers build config 00:02:22.740 net/vdev_netvsc: not in enabled drivers build config 00:02:22.740 net/vhost: not in enabled drivers build config 00:02:22.740 net/virtio: not in enabled drivers build config 00:02:22.740 net/vmxnet3: not in enabled drivers build config 00:02:22.740 raw/*: missing internal dependency, "rawdev" 00:02:22.740 crypto/armv8: not in enabled drivers build config 00:02:22.740 crypto/bcmfs: not in enabled drivers build config 00:02:22.740 crypto/caam_jr: not in enabled drivers build config 00:02:22.740 crypto/ccp: not in enabled drivers build config 00:02:22.740 crypto/cnxk: not in enabled drivers build config 00:02:22.740 crypto/dpaa_sec: not in enabled drivers build config 00:02:22.740 crypto/dpaa2_sec: not in enabled drivers build config 00:02:22.740 crypto/ipsec_mb: not in enabled drivers build config 00:02:22.740 crypto/mlx5: not in enabled drivers build config 00:02:22.740 crypto/mvsam: not in enabled drivers build config 00:02:22.740 crypto/nitrox: not in enabled drivers build config 00:02:22.740 crypto/null: not in enabled drivers build config 00:02:22.740 crypto/octeontx: not in enabled drivers build config 00:02:22.740 crypto/openssl: not in enabled drivers build config 00:02:22.740 crypto/scheduler: not in enabled drivers build config 00:02:22.740 crypto/uadk: not in enabled drivers build config 00:02:22.740 crypto/virtio: not in enabled drivers build config 00:02:22.740 compress/isal: not in enabled drivers build config 00:02:22.740 compress/mlx5: not in enabled drivers build config 00:02:22.740 compress/nitrox: not in enabled drivers build config 00:02:22.740 compress/octeontx: not in enabled drivers build config 00:02:22.740 compress/zlib: not in enabled drivers build config 00:02:22.740 regex/*: missing internal dependency, "regexdev" 00:02:22.740 ml/*: missing internal dependency, "mldev" 
00:02:22.740 vdpa/ifc: not in enabled drivers build config 00:02:22.740 vdpa/mlx5: not in enabled drivers build config 00:02:22.740 vdpa/nfp: not in enabled drivers build config 00:02:22.740 vdpa/sfc: not in enabled drivers build config 00:02:22.740 event/*: missing internal dependency, "eventdev" 00:02:22.740 baseband/*: missing internal dependency, "bbdev" 00:02:22.740 gpu/*: missing internal dependency, "gpudev" 00:02:22.740 00:02:22.740 00:02:22.999 Build targets in project: 85 00:02:22.999 00:02:22.999 DPDK 24.03.0 00:02:22.999 00:02:22.999 User defined options 00:02:22.999 buildtype : debug 00:02:22.999 default_library : shared 00:02:22.999 libdir : lib 00:02:22.999 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:02:22.999 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:02:22.999 c_link_args : 00:02:22.999 cpu_instruction_set: native 00:02:22.999 disable_apps : test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:02:22.999 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:02:22.999 enable_docs : false 00:02:22.999 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm 00:02:22.999 enable_kmods : false 00:02:22.999 max_lcores : 128 00:02:22.999 tests : false 00:02:22.999 00:02:22.999 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:02:23.573 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:02:23.573 [1/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:23.573 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:02:23.573 [3/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:02:23.573 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:23.573 [5/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:02:23.573 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:23.573 [7/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:23.573 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:23.573 [9/268] Linking static target lib/librte_kvargs.a 00:02:23.573 [10/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:23.573 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:02:23.573 [12/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:02:23.573 [13/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:02:23.573 [14/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:23.573 [15/268] Linking static target lib/librte_log.a 00:02:23.573 [16/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:02:23.573 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:23.573 [18/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:23.573 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:23.835 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:23.835 [21/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:23.835 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:23.835 [23/268] Linking static target lib/librte_pci.a 00:02:23.835 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:23.835 [25/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:24.099 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:24.099 [27/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:02:24.099 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:24.099 [29/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:24.099 [30/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:24.099 [31/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:24.099 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:02:24.099 [33/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:24.099 [34/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:24.099 [35/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:24.099 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:02:24.099 [37/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:02:24.099 [38/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:24.099 [39/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:24.099 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:24.099 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:24.099 [42/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:24.099 [43/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:02:24.099 [44/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:02:24.099 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:24.099 [46/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:24.099 [47/268] 
Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:24.099 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:24.099 [49/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:24.099 [50/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:24.099 [51/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:24.099 [52/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:24.099 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:02:24.099 [54/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:24.099 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:02:24.099 [56/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:02:24.099 [57/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:02:24.099 [58/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:24.099 [59/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:02:24.099 [60/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:24.099 [61/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:24.099 [62/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:24.099 [63/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:24.099 [64/268] Linking static target lib/librte_ring.a 00:02:24.099 [65/268] Linking static target lib/librte_meter.a 00:02:24.099 [66/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:02:24.099 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:24.099 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:24.099 [69/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 
00:02:24.099 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:24.099 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:24.099 [72/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:24.099 [73/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:24.099 [74/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:02:24.099 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:24.099 [76/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:24.099 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:24.099 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:24.099 [79/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:24.100 [80/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:02:24.100 [81/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:24.100 [82/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:24.100 [83/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:24.100 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:24.100 [85/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:24.100 [86/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:02:24.100 [87/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:24.100 [88/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:24.100 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:02:24.100 [90/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:24.100 [91/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:24.100 [92/268] Compiling C object 
lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:02:24.100 [93/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.100 [94/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.100 [95/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:24.358 [96/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:24.358 [97/268] Linking static target lib/librte_telemetry.a 00:02:24.358 [98/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:24.358 [99/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:24.358 [100/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:24.358 [101/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:24.358 [102/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:24.358 [103/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:24.358 [104/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:24.359 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:24.359 [106/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:24.359 [107/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:24.359 [108/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:24.359 [109/268] Linking static target lib/librte_mempool.a 00:02:24.359 [110/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:24.359 [111/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:24.359 [112/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:24.359 [113/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:24.359 [114/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:24.359 [115/268] 
Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:24.359 [116/268] Linking static target lib/librte_net.a 00:02:24.359 [117/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:24.359 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:24.359 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:24.359 [120/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:24.359 [121/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:24.359 [122/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:24.359 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:24.359 [124/268] Linking static target lib/librte_rcu.a 00:02:24.359 [125/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:24.359 [126/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:24.359 [127/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:24.359 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:24.359 [129/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:24.359 [130/268] Linking static target lib/librte_eal.a 00:02:24.359 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:24.359 [132/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:24.359 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:24.359 [134/268] Linking static target lib/librte_cmdline.a 00:02:24.359 [135/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.359 [136/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.359 [137/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:24.359 [138/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:24.617 [139/268] Linking target lib/librte_log.so.24.1 00:02:24.617 [140/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:24.617 [141/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:24.617 [142/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:24.617 [143/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:24.617 [144/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:24.617 [145/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:24.617 [146/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:24.618 [147/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.618 [148/268] Linking static target lib/librte_mbuf.a 00:02:24.618 [149/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:24.618 [150/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:24.618 [151/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.618 [152/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:24.618 [153/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:24.618 [154/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:24.618 [155/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:24.618 [156/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:24.618 [157/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:24.618 [158/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:24.618 [159/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:24.618 [160/268] Compiling C object 
lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:24.618 [161/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:24.618 [162/268] Linking static target lib/librte_timer.a 00:02:24.618 [163/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:24.618 [164/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:24.618 [165/268] Linking static target lib/librte_compressdev.a 00:02:24.618 [166/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:24.618 [167/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:24.618 [168/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:24.618 [169/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:24.618 [170/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:24.618 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:02:24.618 [172/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:24.618 [173/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:24.618 [174/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:24.618 [175/268] Linking static target lib/librte_dmadev.a 00:02:24.618 [176/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:24.618 [177/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:24.618 [178/268] Linking target lib/librte_kvargs.so.24.1 00:02:24.618 [179/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:24.618 [180/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:24.618 [181/268] Linking target lib/librte_telemetry.so.24.1 00:02:24.618 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:24.618 [183/268] Compiling C object 
lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:24.618 [184/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:24.618 [185/268] Linking static target lib/librte_power.a 00:02:24.878 [186/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:24.878 [187/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:24.878 [188/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:24.878 [189/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:24.878 [190/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:24.878 [191/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:24.878 [192/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:24.878 [193/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:24.878 [194/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:24.878 [195/268] Linking static target lib/librte_reorder.a 00:02:24.878 [196/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.878 [197/268] Linking static target lib/librte_hash.a 00:02:24.878 [198/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:24.878 [199/268] Linking static target drivers/librte_bus_vdev.a 00:02:24.878 [200/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:24.878 [201/268] Linking static target lib/librte_security.a 00:02:24.878 [202/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:24.878 [203/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:24.878 [204/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.878 [205/268] Compiling C object 
drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:24.878 [206/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:24.878 [207/268] Linking static target drivers/librte_bus_pci.a 00:02:24.878 [208/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.878 [209/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:24.878 [210/268] Linking static target drivers/librte_mempool_ring.a 00:02:25.137 [211/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.137 [212/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.137 [213/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:25.137 [214/268] Linking static target lib/librte_cryptodev.a 00:02:25.137 [215/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.396 [216/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:25.396 [217/268] Linking static target lib/librte_ethdev.a 00:02:25.396 [218/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.396 [219/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.396 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.396 [221/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.655 [222/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.655 [223/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.655 [224/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 
00:02:25.655 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:25.655 [226/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:25.915 [227/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:26.850 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:26.850 [229/268] Linking static target lib/librte_vhost.a 00:02:27.109 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:28.488 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.759 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.328 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.328 [234/268] Linking target lib/librte_eal.so.24.1 00:02:34.587 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:34.587 [236/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:34.587 [237/268] Linking target lib/librte_ring.so.24.1 00:02:34.587 [238/268] Linking target lib/librte_meter.so.24.1 00:02:34.587 [239/268] Linking target lib/librte_timer.so.24.1 00:02:34.587 [240/268] Linking target lib/librte_pci.so.24.1 00:02:34.587 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:34.587 [242/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:34.587 [243/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:34.587 [244/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:34.587 [245/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:34.587 [246/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:34.587 [247/268] Linking target 
lib/librte_rcu.so.24.1 00:02:34.587 [248/268] Linking target lib/librte_mempool.so.24.1 00:02:34.846 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:34.846 [250/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:34.846 [251/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:34.846 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:34.846 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:35.105 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:35.105 [255/268] Linking target lib/librte_net.so.24.1 00:02:35.105 [256/268] Linking target lib/librte_reorder.so.24.1 00:02:35.105 [257/268] Linking target lib/librte_cryptodev.so.24.1 00:02:35.105 [258/268] Linking target lib/librte_compressdev.so.24.1 00:02:35.105 [259/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:35.105 [260/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:35.105 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:35.364 [262/268] Linking target lib/librte_hash.so.24.1 00:02:35.364 [263/268] Linking target lib/librte_ethdev.so.24.1 00:02:35.364 [264/268] Linking target lib/librte_security.so.24.1 00:02:35.364 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:35.364 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:35.364 [267/268] Linking target lib/librte_power.so.24.1 00:02:35.364 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:35.364 INFO: autodetecting backend as ninja 00:02:35.364 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:47.578 CC lib/ut_mock/mock.o 00:02:47.578 CC lib/log/log.o 00:02:47.578 CC lib/log/log_flags.o 00:02:47.578 CC lib/log/log_deprecated.o 
00:02:47.578 CC lib/ut/ut.o 00:02:47.578 LIB libspdk_log.a 00:02:47.578 LIB libspdk_ut.a 00:02:47.578 LIB libspdk_ut_mock.a 00:02:47.578 SO libspdk_ut.so.2.0 00:02:47.578 SO libspdk_log.so.7.1 00:02:47.578 SO libspdk_ut_mock.so.6.0 00:02:47.578 SYMLINK libspdk_ut.so 00:02:47.578 SYMLINK libspdk_ut_mock.so 00:02:47.578 SYMLINK libspdk_log.so 00:02:47.578 CC lib/dma/dma.o 00:02:47.578 CC lib/util/base64.o 00:02:47.578 CC lib/util/bit_array.o 00:02:47.578 CC lib/util/crc16.o 00:02:47.578 CC lib/util/cpuset.o 00:02:47.578 CC lib/util/crc32.o 00:02:47.578 CXX lib/trace_parser/trace.o 00:02:47.578 CC lib/ioat/ioat.o 00:02:47.578 CC lib/util/crc32c.o 00:02:47.578 CC lib/util/crc32_ieee.o 00:02:47.578 CC lib/util/crc64.o 00:02:47.578 CC lib/util/dif.o 00:02:47.578 CC lib/util/fd.o 00:02:47.578 CC lib/util/fd_group.o 00:02:47.578 CC lib/util/file.o 00:02:47.578 CC lib/util/hexlify.o 00:02:47.578 CC lib/util/iov.o 00:02:47.578 CC lib/util/math.o 00:02:47.578 CC lib/util/net.o 00:02:47.579 CC lib/util/pipe.o 00:02:47.579 CC lib/util/strerror_tls.o 00:02:47.579 CC lib/util/string.o 00:02:47.579 CC lib/util/uuid.o 00:02:47.579 CC lib/util/xor.o 00:02:47.579 CC lib/util/zipf.o 00:02:47.579 CC lib/util/md5.o 00:02:47.579 CC lib/vfio_user/host/vfio_user_pci.o 00:02:47.579 CC lib/vfio_user/host/vfio_user.o 00:02:47.579 LIB libspdk_dma.a 00:02:47.579 SO libspdk_dma.so.5.0 00:02:47.579 LIB libspdk_ioat.a 00:02:47.579 SYMLINK libspdk_dma.so 00:02:47.579 SO libspdk_ioat.so.7.0 00:02:47.579 SYMLINK libspdk_ioat.so 00:02:47.579 LIB libspdk_vfio_user.a 00:02:47.579 SO libspdk_vfio_user.so.5.0 00:02:47.579 LIB libspdk_util.a 00:02:47.579 SYMLINK libspdk_vfio_user.so 00:02:47.579 SO libspdk_util.so.10.1 00:02:47.579 SYMLINK libspdk_util.so 00:02:47.579 LIB libspdk_trace_parser.a 00:02:47.579 SO libspdk_trace_parser.so.6.0 00:02:47.579 SYMLINK libspdk_trace_parser.so 00:02:47.579 CC lib/json/json_parse.o 00:02:47.579 CC lib/json/json_util.o 00:02:47.579 CC lib/json/json_write.o 00:02:47.579 
CC lib/rdma_utils/rdma_utils.o 00:02:47.579 CC lib/idxd/idxd.o 00:02:47.579 CC lib/conf/conf.o 00:02:47.579 CC lib/idxd/idxd_user.o 00:02:47.579 CC lib/idxd/idxd_kernel.o 00:02:47.579 CC lib/vmd/vmd.o 00:02:47.579 CC lib/vmd/led.o 00:02:47.579 CC lib/env_dpdk/env.o 00:02:47.579 CC lib/env_dpdk/memory.o 00:02:47.579 CC lib/env_dpdk/pci.o 00:02:47.579 CC lib/env_dpdk/init.o 00:02:47.579 CC lib/env_dpdk/threads.o 00:02:47.579 CC lib/env_dpdk/pci_ioat.o 00:02:47.579 CC lib/env_dpdk/pci_virtio.o 00:02:47.579 CC lib/env_dpdk/pci_vmd.o 00:02:47.579 CC lib/env_dpdk/pci_idxd.o 00:02:47.579 CC lib/env_dpdk/pci_event.o 00:02:47.579 CC lib/env_dpdk/sigbus_handler.o 00:02:47.579 CC lib/env_dpdk/pci_dpdk.o 00:02:47.579 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:47.579 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:47.579 LIB libspdk_conf.a 00:02:47.838 LIB libspdk_rdma_utils.a 00:02:47.838 SO libspdk_conf.so.6.0 00:02:47.838 LIB libspdk_json.a 00:02:47.838 SO libspdk_rdma_utils.so.1.0 00:02:47.838 SO libspdk_json.so.6.0 00:02:47.838 SYMLINK libspdk_conf.so 00:02:47.838 SYMLINK libspdk_rdma_utils.so 00:02:47.838 SYMLINK libspdk_json.so 00:02:47.838 LIB libspdk_idxd.a 00:02:48.097 SO libspdk_idxd.so.12.1 00:02:48.097 LIB libspdk_vmd.a 00:02:48.097 SO libspdk_vmd.so.6.0 00:02:48.097 SYMLINK libspdk_idxd.so 00:02:48.097 CC lib/rdma_provider/common.o 00:02:48.097 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:48.097 SYMLINK libspdk_vmd.so 00:02:48.097 CC lib/jsonrpc/jsonrpc_server.o 00:02:48.097 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:48.097 CC lib/jsonrpc/jsonrpc_client.o 00:02:48.097 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:48.357 LIB libspdk_rdma_provider.a 00:02:48.357 SO libspdk_rdma_provider.so.7.0 00:02:48.357 LIB libspdk_jsonrpc.a 00:02:48.357 SO libspdk_jsonrpc.so.6.0 00:02:48.357 SYMLINK libspdk_rdma_provider.so 00:02:48.357 SYMLINK libspdk_jsonrpc.so 00:02:48.619 LIB libspdk_env_dpdk.a 00:02:48.619 SO libspdk_env_dpdk.so.15.1 00:02:48.619 SYMLINK libspdk_env_dpdk.so 00:02:48.619 CC 
lib/rpc/rpc.o 00:02:48.879 LIB libspdk_rpc.a 00:02:48.879 SO libspdk_rpc.so.6.0 00:02:48.879 SYMLINK libspdk_rpc.so 00:02:49.448 CC lib/keyring/keyring.o 00:02:49.448 CC lib/keyring/keyring_rpc.o 00:02:49.448 CC lib/notify/notify.o 00:02:49.448 CC lib/trace/trace.o 00:02:49.448 CC lib/notify/notify_rpc.o 00:02:49.448 CC lib/trace/trace_flags.o 00:02:49.448 CC lib/trace/trace_rpc.o 00:02:49.448 LIB libspdk_notify.a 00:02:49.448 SO libspdk_notify.so.6.0 00:02:49.448 LIB libspdk_keyring.a 00:02:49.448 LIB libspdk_trace.a 00:02:49.448 SO libspdk_keyring.so.2.0 00:02:49.448 SYMLINK libspdk_notify.so 00:02:49.707 SO libspdk_trace.so.11.0 00:02:49.707 SYMLINK libspdk_keyring.so 00:02:49.707 SYMLINK libspdk_trace.so 00:02:49.966 CC lib/thread/thread.o 00:02:49.966 CC lib/thread/iobuf.o 00:02:49.966 CC lib/sock/sock.o 00:02:49.966 CC lib/sock/sock_rpc.o 00:02:50.225 LIB libspdk_sock.a 00:02:50.225 SO libspdk_sock.so.10.0 00:02:50.484 SYMLINK libspdk_sock.so 00:02:50.742 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:50.742 CC lib/nvme/nvme_ctrlr.o 00:02:50.742 CC lib/nvme/nvme_fabric.o 00:02:50.742 CC lib/nvme/nvme_ns_cmd.o 00:02:50.742 CC lib/nvme/nvme_ns.o 00:02:50.742 CC lib/nvme/nvme_pcie_common.o 00:02:50.742 CC lib/nvme/nvme_pcie.o 00:02:50.742 CC lib/nvme/nvme_qpair.o 00:02:50.742 CC lib/nvme/nvme.o 00:02:50.743 CC lib/nvme/nvme_quirks.o 00:02:50.743 CC lib/nvme/nvme_transport.o 00:02:50.743 CC lib/nvme/nvme_discovery.o 00:02:50.743 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:50.743 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:50.743 CC lib/nvme/nvme_tcp.o 00:02:50.743 CC lib/nvme/nvme_opal.o 00:02:50.743 CC lib/nvme/nvme_io_msg.o 00:02:50.743 CC lib/nvme/nvme_poll_group.o 00:02:50.743 CC lib/nvme/nvme_zns.o 00:02:50.743 CC lib/nvme/nvme_stubs.o 00:02:50.743 CC lib/nvme/nvme_auth.o 00:02:50.743 CC lib/nvme/nvme_cuse.o 00:02:50.743 CC lib/nvme/nvme_vfio_user.o 00:02:50.743 CC lib/nvme/nvme_rdma.o 00:02:51.001 LIB libspdk_thread.a 00:02:51.001 SO libspdk_thread.so.11.0 00:02:51.260 SYMLINK 
libspdk_thread.so 00:02:51.518 CC lib/virtio/virtio.o 00:02:51.518 CC lib/virtio/virtio_pci.o 00:02:51.518 CC lib/virtio/virtio_vhost_user.o 00:02:51.518 CC lib/virtio/virtio_vfio_user.o 00:02:51.518 CC lib/accel/accel.o 00:02:51.518 CC lib/accel/accel_rpc.o 00:02:51.518 CC lib/accel/accel_sw.o 00:02:51.518 CC lib/init/json_config.o 00:02:51.518 CC lib/init/subsystem.o 00:02:51.518 CC lib/init/subsystem_rpc.o 00:02:51.518 CC lib/init/rpc.o 00:02:51.518 CC lib/fsdev/fsdev_io.o 00:02:51.518 CC lib/fsdev/fsdev.o 00:02:51.518 CC lib/fsdev/fsdev_rpc.o 00:02:51.518 CC lib/blob/blobstore.o 00:02:51.518 CC lib/blob/blob_bs_dev.o 00:02:51.518 CC lib/blob/request.o 00:02:51.518 CC lib/blob/zeroes.o 00:02:51.518 CC lib/vfu_tgt/tgt_endpoint.o 00:02:51.518 CC lib/vfu_tgt/tgt_rpc.o 00:02:51.777 LIB libspdk_init.a 00:02:51.777 SO libspdk_init.so.6.0 00:02:51.777 LIB libspdk_virtio.a 00:02:51.777 LIB libspdk_vfu_tgt.a 00:02:51.777 SYMLINK libspdk_init.so 00:02:51.777 SO libspdk_virtio.so.7.0 00:02:51.777 SO libspdk_vfu_tgt.so.3.0 00:02:51.777 SYMLINK libspdk_virtio.so 00:02:51.777 SYMLINK libspdk_vfu_tgt.so 00:02:52.037 LIB libspdk_fsdev.a 00:02:52.037 SO libspdk_fsdev.so.2.0 00:02:52.037 CC lib/event/app.o 00:02:52.037 CC lib/event/reactor.o 00:02:52.037 CC lib/event/log_rpc.o 00:02:52.037 CC lib/event/app_rpc.o 00:02:52.037 CC lib/event/scheduler_static.o 00:02:52.037 SYMLINK libspdk_fsdev.so 00:02:52.296 LIB libspdk_accel.a 00:02:52.296 SO libspdk_accel.so.16.0 00:02:52.296 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:02:52.296 LIB libspdk_event.a 00:02:52.296 SYMLINK libspdk_accel.so 00:02:52.296 LIB libspdk_nvme.a 00:02:52.555 SO libspdk_event.so.14.0 00:02:52.555 SO libspdk_nvme.so.15.0 00:02:52.555 SYMLINK libspdk_event.so 00:02:52.555 SYMLINK libspdk_nvme.so 00:02:52.814 CC lib/bdev/bdev.o 00:02:52.814 CC lib/bdev/bdev_rpc.o 00:02:52.814 CC lib/bdev/bdev_zone.o 00:02:52.814 CC lib/bdev/part.o 00:02:52.814 CC lib/bdev/scsi_nvme.o 00:02:52.814 LIB libspdk_fuse_dispatcher.a 
00:02:52.814 SO libspdk_fuse_dispatcher.so.1.0 00:02:52.814 SYMLINK libspdk_fuse_dispatcher.so 00:02:53.751 LIB libspdk_blob.a 00:02:53.751 SO libspdk_blob.so.11.0 00:02:53.751 SYMLINK libspdk_blob.so 00:02:54.010 CC lib/blobfs/blobfs.o 00:02:54.010 CC lib/lvol/lvol.o 00:02:54.010 CC lib/blobfs/tree.o 00:02:54.576 LIB libspdk_bdev.a 00:02:54.576 SO libspdk_bdev.so.17.0 00:02:54.576 LIB libspdk_blobfs.a 00:02:54.576 SO libspdk_blobfs.so.10.0 00:02:54.576 SYMLINK libspdk_bdev.so 00:02:54.576 LIB libspdk_lvol.a 00:02:54.835 SYMLINK libspdk_blobfs.so 00:02:54.835 SO libspdk_lvol.so.10.0 00:02:54.835 SYMLINK libspdk_lvol.so 00:02:55.097 CC lib/nbd/nbd.o 00:02:55.097 CC lib/nbd/nbd_rpc.o 00:02:55.097 CC lib/scsi/dev.o 00:02:55.097 CC lib/ftl/ftl_core.o 00:02:55.097 CC lib/ftl/ftl_init.o 00:02:55.097 CC lib/scsi/lun.o 00:02:55.097 CC lib/nvmf/ctrlr.o 00:02:55.097 CC lib/scsi/port.o 00:02:55.097 CC lib/ftl/ftl_layout.o 00:02:55.097 CC lib/nvmf/ctrlr_discovery.o 00:02:55.097 CC lib/scsi/scsi.o 00:02:55.097 CC lib/ftl/ftl_debug.o 00:02:55.097 CC lib/nvmf/ctrlr_bdev.o 00:02:55.097 CC lib/scsi/scsi_bdev.o 00:02:55.097 CC lib/ftl/ftl_io.o 00:02:55.097 CC lib/scsi/scsi_pr.o 00:02:55.097 CC lib/nvmf/subsystem.o 00:02:55.097 CC lib/ftl/ftl_sb.o 00:02:55.097 CC lib/scsi/scsi_rpc.o 00:02:55.097 CC lib/ftl/ftl_l2p.o 00:02:55.097 CC lib/scsi/task.o 00:02:55.097 CC lib/nvmf/nvmf.o 00:02:55.097 CC lib/nvmf/nvmf_rpc.o 00:02:55.097 CC lib/ftl/ftl_l2p_flat.o 00:02:55.097 CC lib/ublk/ublk.o 00:02:55.097 CC lib/ublk/ublk_rpc.o 00:02:55.097 CC lib/nvmf/transport.o 00:02:55.097 CC lib/ftl/ftl_nv_cache.o 00:02:55.097 CC lib/ftl/ftl_band.o 00:02:55.097 CC lib/nvmf/tcp.o 00:02:55.097 CC lib/nvmf/stubs.o 00:02:55.097 CC lib/nvmf/vfio_user.o 00:02:55.097 CC lib/nvmf/mdns_server.o 00:02:55.097 CC lib/ftl/ftl_band_ops.o 00:02:55.097 CC lib/ftl/ftl_writer.o 00:02:55.097 CC lib/ftl/ftl_rq.o 00:02:55.097 CC lib/nvmf/rdma.o 00:02:55.097 CC lib/ftl/ftl_reloc.o 00:02:55.097 CC lib/nvmf/auth.o 00:02:55.097 
CC lib/ftl/ftl_l2p_cache.o 00:02:55.097 CC lib/ftl/ftl_p2l.o 00:02:55.097 CC lib/ftl/ftl_p2l_log.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:55.097 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:55.097 CC lib/ftl/utils/ftl_conf.o 00:02:55.097 CC lib/ftl/utils/ftl_md.o 00:02:55.097 CC lib/ftl/utils/ftl_mempool.o 00:02:55.097 CC lib/ftl/utils/ftl_bitmap.o 00:02:55.097 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:55.097 CC lib/ftl/utils/ftl_property.o 00:02:55.097 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:55.097 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:55.097 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:55.097 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:55.097 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:55.097 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:02:55.097 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:02:55.097 CC lib/ftl/base/ftl_base_bdev.o 00:02:55.097 CC lib/ftl/ftl_trace.o 00:02:55.097 CC lib/ftl/base/ftl_base_dev.o 00:02:55.354 LIB libspdk_nbd.a 00:02:55.611 SO libspdk_nbd.so.7.0 00:02:55.611 SYMLINK libspdk_nbd.so 00:02:55.611 LIB libspdk_scsi.a 00:02:55.611 SO libspdk_scsi.so.9.0 00:02:55.611 LIB libspdk_ublk.a 00:02:55.869 SYMLINK libspdk_scsi.so 00:02:55.869 SO libspdk_ublk.so.3.0 00:02:55.869 SYMLINK libspdk_ublk.so 00:02:56.126 LIB 
libspdk_ftl.a 00:02:56.126 CC lib/vhost/vhost.o 00:02:56.126 CC lib/vhost/vhost_scsi.o 00:02:56.126 CC lib/vhost/vhost_rpc.o 00:02:56.126 CC lib/vhost/vhost_blk.o 00:02:56.126 CC lib/vhost/rte_vhost_user.o 00:02:56.126 CC lib/iscsi/conn.o 00:02:56.126 CC lib/iscsi/init_grp.o 00:02:56.126 CC lib/iscsi/iscsi.o 00:02:56.126 CC lib/iscsi/param.o 00:02:56.126 CC lib/iscsi/portal_grp.o 00:02:56.126 CC lib/iscsi/tgt_node.o 00:02:56.126 CC lib/iscsi/iscsi_subsystem.o 00:02:56.126 CC lib/iscsi/iscsi_rpc.o 00:02:56.126 CC lib/iscsi/task.o 00:02:56.126 SO libspdk_ftl.so.9.0 00:02:56.385 SYMLINK libspdk_ftl.so 00:02:56.951 LIB libspdk_nvmf.a 00:02:56.951 LIB libspdk_vhost.a 00:02:56.951 SO libspdk_vhost.so.8.0 00:02:56.951 SO libspdk_nvmf.so.20.0 00:02:56.951 SYMLINK libspdk_vhost.so 00:02:56.951 LIB libspdk_iscsi.a 00:02:56.951 SYMLINK libspdk_nvmf.so 00:02:57.209 SO libspdk_iscsi.so.8.0 00:02:57.209 SYMLINK libspdk_iscsi.so 00:02:57.773 CC module/env_dpdk/env_dpdk_rpc.o 00:02:57.774 CC module/vfu_device/vfu_virtio_blk.o 00:02:57.774 CC module/vfu_device/vfu_virtio.o 00:02:57.774 CC module/vfu_device/vfu_virtio_scsi.o 00:02:57.774 CC module/vfu_device/vfu_virtio_rpc.o 00:02:57.774 CC module/vfu_device/vfu_virtio_fs.o 00:02:57.774 CC module/accel/ioat/accel_ioat.o 00:02:57.774 CC module/accel/ioat/accel_ioat_rpc.o 00:02:57.774 CC module/accel/iaa/accel_iaa.o 00:02:57.774 CC module/accel/iaa/accel_iaa_rpc.o 00:02:57.774 CC module/accel/dsa/accel_dsa.o 00:02:57.774 CC module/accel/error/accel_error.o 00:02:57.774 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:57.774 CC module/accel/error/accel_error_rpc.o 00:02:57.774 CC module/blob/bdev/blob_bdev.o 00:02:57.774 CC module/accel/dsa/accel_dsa_rpc.o 00:02:57.774 LIB libspdk_env_dpdk_rpc.a 00:02:57.774 CC module/scheduler/gscheduler/gscheduler.o 00:02:57.774 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:57.774 CC module/sock/posix/posix.o 00:02:57.774 CC module/keyring/file/keyring.o 00:02:57.774 CC 
module/fsdev/aio/fsdev_aio.o 00:02:57.774 CC module/keyring/file/keyring_rpc.o 00:02:57.774 CC module/fsdev/aio/fsdev_aio_rpc.o 00:02:57.774 CC module/fsdev/aio/linux_aio_mgr.o 00:02:57.774 CC module/keyring/linux/keyring.o 00:02:57.774 CC module/keyring/linux/keyring_rpc.o 00:02:58.030 SO libspdk_env_dpdk_rpc.so.6.0 00:02:58.030 SYMLINK libspdk_env_dpdk_rpc.so 00:02:58.030 LIB libspdk_keyring_linux.a 00:02:58.030 LIB libspdk_scheduler_gscheduler.a 00:02:58.030 LIB libspdk_keyring_file.a 00:02:58.030 LIB libspdk_accel_ioat.a 00:02:58.030 SO libspdk_keyring_linux.so.1.0 00:02:58.030 LIB libspdk_scheduler_dpdk_governor.a 00:02:58.030 SO libspdk_scheduler_gscheduler.so.4.0 00:02:58.030 SO libspdk_keyring_file.so.2.0 00:02:58.030 LIB libspdk_accel_error.a 00:02:58.030 LIB libspdk_accel_iaa.a 00:02:58.030 SO libspdk_accel_ioat.so.6.0 00:02:58.030 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:58.030 LIB libspdk_scheduler_dynamic.a 00:02:58.030 SYMLINK libspdk_keyring_linux.so 00:02:58.030 SO libspdk_accel_iaa.so.3.0 00:02:58.030 SO libspdk_accel_error.so.2.0 00:02:58.030 SO libspdk_scheduler_dynamic.so.4.0 00:02:58.030 SYMLINK libspdk_keyring_file.so 00:02:58.030 SYMLINK libspdk_scheduler_gscheduler.so 00:02:58.030 LIB libspdk_blob_bdev.a 00:02:58.288 SYMLINK libspdk_accel_ioat.so 00:02:58.288 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:58.288 LIB libspdk_accel_dsa.a 00:02:58.288 SO libspdk_blob_bdev.so.11.0 00:02:58.288 SYMLINK libspdk_accel_error.so 00:02:58.288 SYMLINK libspdk_accel_iaa.so 00:02:58.288 SO libspdk_accel_dsa.so.5.0 00:02:58.288 SYMLINK libspdk_scheduler_dynamic.so 00:02:58.288 SYMLINK libspdk_blob_bdev.so 00:02:58.288 LIB libspdk_vfu_device.a 00:02:58.288 SYMLINK libspdk_accel_dsa.so 00:02:58.288 SO libspdk_vfu_device.so.3.0 00:02:58.288 SYMLINK libspdk_vfu_device.so 00:02:58.545 LIB libspdk_fsdev_aio.a 00:02:58.545 SO libspdk_fsdev_aio.so.1.0 00:02:58.545 LIB libspdk_sock_posix.a 00:02:58.545 SO libspdk_sock_posix.so.6.0 00:02:58.545 SYMLINK 
libspdk_fsdev_aio.so 00:02:58.545 SYMLINK libspdk_sock_posix.so 00:02:58.803 CC module/bdev/error/vbdev_error_rpc.o 00:02:58.803 CC module/bdev/error/vbdev_error.o 00:02:58.803 CC module/bdev/null/bdev_null.o 00:02:58.803 CC module/bdev/null/bdev_null_rpc.o 00:02:58.803 CC module/bdev/malloc/bdev_malloc.o 00:02:58.803 CC module/bdev/delay/vbdev_delay.o 00:02:58.803 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:58.803 CC module/bdev/raid/bdev_raid.o 00:02:58.803 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:58.803 CC module/bdev/raid/bdev_raid_rpc.o 00:02:58.803 CC module/bdev/raid/bdev_raid_sb.o 00:02:58.803 CC module/bdev/raid/raid0.o 00:02:58.803 CC module/bdev/raid/raid1.o 00:02:58.803 CC module/bdev/raid/concat.o 00:02:58.803 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:58.803 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:58.803 CC module/bdev/iscsi/bdev_iscsi.o 00:02:58.803 CC module/bdev/gpt/vbdev_gpt.o 00:02:58.803 CC module/bdev/gpt/gpt.o 00:02:58.803 CC module/bdev/aio/bdev_aio.o 00:02:58.803 CC module/bdev/aio/bdev_aio_rpc.o 00:02:58.803 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:58.803 CC module/blobfs/bdev/blobfs_bdev.o 00:02:58.803 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:58.803 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:58.803 CC module/bdev/nvme/bdev_nvme.o 00:02:58.803 CC module/bdev/nvme/bdev_mdns_client.o 00:02:58.803 CC module/bdev/nvme/nvme_rpc.o 00:02:58.803 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:58.803 CC module/bdev/nvme/vbdev_opal.o 00:02:58.803 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:58.803 CC module/bdev/split/vbdev_split.o 00:02:58.803 CC module/bdev/lvol/vbdev_lvol.o 00:02:58.803 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:58.803 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:58.803 CC module/bdev/split/vbdev_split_rpc.o 00:02:58.803 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:58.803 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:58.803 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:58.803 CC 
module/bdev/passthru/vbdev_passthru.o 00:02:58.803 CC module/bdev/ftl/bdev_ftl.o 00:02:58.803 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:59.061 LIB libspdk_blobfs_bdev.a 00:02:59.061 LIB libspdk_bdev_error.a 00:02:59.061 LIB libspdk_bdev_null.a 00:02:59.061 SO libspdk_blobfs_bdev.so.6.0 00:02:59.061 LIB libspdk_bdev_split.a 00:02:59.061 SO libspdk_bdev_error.so.6.0 00:02:59.061 SO libspdk_bdev_split.so.6.0 00:02:59.061 SO libspdk_bdev_null.so.6.0 00:02:59.061 LIB libspdk_bdev_gpt.a 00:02:59.061 LIB libspdk_bdev_passthru.a 00:02:59.061 LIB libspdk_bdev_ftl.a 00:02:59.061 SYMLINK libspdk_blobfs_bdev.so 00:02:59.061 LIB libspdk_bdev_zone_block.a 00:02:59.061 SO libspdk_bdev_gpt.so.6.0 00:02:59.061 SYMLINK libspdk_bdev_split.so 00:02:59.061 SYMLINK libspdk_bdev_null.so 00:02:59.061 LIB libspdk_bdev_iscsi.a 00:02:59.061 SYMLINK libspdk_bdev_error.so 00:02:59.061 SO libspdk_bdev_ftl.so.6.0 00:02:59.061 SO libspdk_bdev_passthru.so.6.0 00:02:59.061 SO libspdk_bdev_zone_block.so.6.0 00:02:59.061 LIB libspdk_bdev_aio.a 00:02:59.061 LIB libspdk_bdev_malloc.a 00:02:59.061 SO libspdk_bdev_iscsi.so.6.0 00:02:59.061 SYMLINK libspdk_bdev_gpt.so 00:02:59.061 SO libspdk_bdev_aio.so.6.0 00:02:59.061 SO libspdk_bdev_malloc.so.6.0 00:02:59.061 LIB libspdk_bdev_delay.a 00:02:59.061 SYMLINK libspdk_bdev_ftl.so 00:02:59.061 SYMLINK libspdk_bdev_passthru.so 00:02:59.061 SYMLINK libspdk_bdev_zone_block.so 00:02:59.061 SYMLINK libspdk_bdev_iscsi.so 00:02:59.061 SO libspdk_bdev_delay.so.6.0 00:02:59.319 SYMLINK libspdk_bdev_aio.so 00:02:59.320 SYMLINK libspdk_bdev_malloc.so 00:02:59.320 SYMLINK libspdk_bdev_delay.so 00:02:59.320 LIB libspdk_bdev_lvol.a 00:02:59.320 LIB libspdk_bdev_virtio.a 00:02:59.320 SO libspdk_bdev_lvol.so.6.0 00:02:59.320 SO libspdk_bdev_virtio.so.6.0 00:02:59.320 SYMLINK libspdk_bdev_lvol.so 00:02:59.320 SYMLINK libspdk_bdev_virtio.so 00:02:59.578 LIB libspdk_bdev_raid.a 00:02:59.578 SO libspdk_bdev_raid.so.6.0 00:02:59.578 SYMLINK libspdk_bdev_raid.so 00:03:00.515 LIB 
libspdk_bdev_nvme.a 00:03:00.515 SO libspdk_bdev_nvme.so.7.1 00:03:00.774 SYMLINK libspdk_bdev_nvme.so 00:03:01.344 CC module/event/subsystems/vmd/vmd.o 00:03:01.344 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:01.344 CC module/event/subsystems/keyring/keyring.o 00:03:01.344 CC module/event/subsystems/fsdev/fsdev.o 00:03:01.344 CC module/event/subsystems/sock/sock.o 00:03:01.344 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:01.344 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:03:01.344 CC module/event/subsystems/iobuf/iobuf.o 00:03:01.344 CC module/event/subsystems/scheduler/scheduler.o 00:03:01.344 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:01.603 LIB libspdk_event_keyring.a 00:03:01.603 LIB libspdk_event_sock.a 00:03:01.603 LIB libspdk_event_vmd.a 00:03:01.603 LIB libspdk_event_fsdev.a 00:03:01.603 LIB libspdk_event_vhost_blk.a 00:03:01.603 SO libspdk_event_keyring.so.1.0 00:03:01.603 LIB libspdk_event_vfu_tgt.a 00:03:01.603 LIB libspdk_event_scheduler.a 00:03:01.603 LIB libspdk_event_iobuf.a 00:03:01.603 SO libspdk_event_sock.so.5.0 00:03:01.603 SO libspdk_event_fsdev.so.1.0 00:03:01.603 SO libspdk_event_vmd.so.6.0 00:03:01.603 SO libspdk_event_vhost_blk.so.3.0 00:03:01.603 SO libspdk_event_vfu_tgt.so.3.0 00:03:01.603 SO libspdk_event_iobuf.so.3.0 00:03:01.603 SO libspdk_event_scheduler.so.4.0 00:03:01.603 SYMLINK libspdk_event_keyring.so 00:03:01.603 SYMLINK libspdk_event_sock.so 00:03:01.603 SYMLINK libspdk_event_vmd.so 00:03:01.603 SYMLINK libspdk_event_fsdev.so 00:03:01.603 SYMLINK libspdk_event_vhost_blk.so 00:03:01.603 SYMLINK libspdk_event_vfu_tgt.so 00:03:01.603 SYMLINK libspdk_event_scheduler.so 00:03:01.603 SYMLINK libspdk_event_iobuf.so 00:03:01.862 CC module/event/subsystems/accel/accel.o 00:03:02.121 LIB libspdk_event_accel.a 00:03:02.121 SO libspdk_event_accel.so.6.0 00:03:02.121 SYMLINK libspdk_event_accel.so 00:03:02.381 CC module/event/subsystems/bdev/bdev.o 00:03:02.640 LIB libspdk_event_bdev.a 00:03:02.640 SO 
libspdk_event_bdev.so.6.0 00:03:02.640 SYMLINK libspdk_event_bdev.so 00:03:03.208 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:03.208 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:03.208 CC module/event/subsystems/nbd/nbd.o 00:03:03.208 CC module/event/subsystems/ublk/ublk.o 00:03:03.209 CC module/event/subsystems/scsi/scsi.o 00:03:03.209 LIB libspdk_event_ublk.a 00:03:03.209 LIB libspdk_event_nbd.a 00:03:03.209 LIB libspdk_event_scsi.a 00:03:03.209 SO libspdk_event_ublk.so.3.0 00:03:03.209 SO libspdk_event_nbd.so.6.0 00:03:03.209 SO libspdk_event_scsi.so.6.0 00:03:03.209 LIB libspdk_event_nvmf.a 00:03:03.209 SO libspdk_event_nvmf.so.6.0 00:03:03.209 SYMLINK libspdk_event_ublk.so 00:03:03.209 SYMLINK libspdk_event_nbd.so 00:03:03.209 SYMLINK libspdk_event_scsi.so 00:03:03.209 SYMLINK libspdk_event_nvmf.so 00:03:03.776 CC module/event/subsystems/iscsi/iscsi.o 00:03:03.776 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:03.776 LIB libspdk_event_vhost_scsi.a 00:03:03.776 LIB libspdk_event_iscsi.a 00:03:03.776 SO libspdk_event_vhost_scsi.so.3.0 00:03:03.776 SO libspdk_event_iscsi.so.6.0 00:03:03.776 SYMLINK libspdk_event_vhost_scsi.so 00:03:03.776 SYMLINK libspdk_event_iscsi.so 00:03:04.035 SO libspdk.so.6.0 00:03:04.035 SYMLINK libspdk.so 00:03:04.294 CXX app/trace/trace.o 00:03:04.294 CC app/trace_record/trace_record.o 00:03:04.294 CC test/rpc_client/rpc_client_test.o 00:03:04.294 CC app/spdk_top/spdk_top.o 00:03:04.294 CC app/spdk_nvme_perf/perf.o 00:03:04.294 CC app/spdk_lspci/spdk_lspci.o 00:03:04.294 TEST_HEADER include/spdk/accel_module.h 00:03:04.294 TEST_HEADER include/spdk/accel.h 00:03:04.294 TEST_HEADER include/spdk/barrier.h 00:03:04.294 TEST_HEADER include/spdk/assert.h 00:03:04.294 TEST_HEADER include/spdk/bdev.h 00:03:04.294 TEST_HEADER include/spdk/base64.h 00:03:04.294 TEST_HEADER include/spdk/bdev_module.h 00:03:04.294 TEST_HEADER include/spdk/bdev_zone.h 00:03:04.294 TEST_HEADER include/spdk/bit_array.h 00:03:04.294 TEST_HEADER 
include/spdk/bit_pool.h 00:03:04.294 TEST_HEADER include/spdk/blob_bdev.h 00:03:04.294 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:04.294 CC app/spdk_nvme_discover/discovery_aer.o 00:03:04.294 CC app/spdk_nvme_identify/identify.o 00:03:04.294 TEST_HEADER include/spdk/blob.h 00:03:04.294 TEST_HEADER include/spdk/blobfs.h 00:03:04.294 TEST_HEADER include/spdk/conf.h 00:03:04.294 TEST_HEADER include/spdk/config.h 00:03:04.294 TEST_HEADER include/spdk/crc16.h 00:03:04.294 TEST_HEADER include/spdk/cpuset.h 00:03:04.294 TEST_HEADER include/spdk/crc32.h 00:03:04.294 TEST_HEADER include/spdk/dif.h 00:03:04.294 TEST_HEADER include/spdk/crc64.h 00:03:04.294 TEST_HEADER include/spdk/dma.h 00:03:04.294 TEST_HEADER include/spdk/env_dpdk.h 00:03:04.294 TEST_HEADER include/spdk/endian.h 00:03:04.294 TEST_HEADER include/spdk/env.h 00:03:04.294 TEST_HEADER include/spdk/event.h 00:03:04.294 TEST_HEADER include/spdk/fd_group.h 00:03:04.294 TEST_HEADER include/spdk/fd.h 00:03:04.294 TEST_HEADER include/spdk/file.h 00:03:04.294 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:04.294 TEST_HEADER include/spdk/fsdev_module.h 00:03:04.294 TEST_HEADER include/spdk/fsdev.h 00:03:04.294 TEST_HEADER include/spdk/ftl.h 00:03:04.294 TEST_HEADER include/spdk/fuse_dispatcher.h 00:03:04.294 TEST_HEADER include/spdk/gpt_spec.h 00:03:04.295 TEST_HEADER include/spdk/hexlify.h 00:03:04.295 TEST_HEADER include/spdk/histogram_data.h 00:03:04.295 CC app/nvmf_tgt/nvmf_main.o 00:03:04.295 CC app/iscsi_tgt/iscsi_tgt.o 00:03:04.295 TEST_HEADER include/spdk/idxd.h 00:03:04.295 TEST_HEADER include/spdk/idxd_spec.h 00:03:04.555 TEST_HEADER include/spdk/init.h 00:03:04.555 TEST_HEADER include/spdk/ioat.h 00:03:04.555 TEST_HEADER include/spdk/ioat_spec.h 00:03:04.555 TEST_HEADER include/spdk/json.h 00:03:04.555 TEST_HEADER include/spdk/jsonrpc.h 00:03:04.555 TEST_HEADER include/spdk/iscsi_spec.h 00:03:04.555 TEST_HEADER include/spdk/keyring_module.h 00:03:04.555 TEST_HEADER include/spdk/keyring.h 00:03:04.555 CC 
app/spdk_dd/spdk_dd.o 00:03:04.555 TEST_HEADER include/spdk/likely.h 00:03:04.555 TEST_HEADER include/spdk/log.h 00:03:04.555 TEST_HEADER include/spdk/lvol.h 00:03:04.555 TEST_HEADER include/spdk/md5.h 00:03:04.555 TEST_HEADER include/spdk/memory.h 00:03:04.555 TEST_HEADER include/spdk/mmio.h 00:03:04.555 TEST_HEADER include/spdk/nbd.h 00:03:04.555 TEST_HEADER include/spdk/net.h 00:03:04.555 TEST_HEADER include/spdk/notify.h 00:03:04.555 TEST_HEADER include/spdk/nvme_intel.h 00:03:04.555 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:04.555 TEST_HEADER include/spdk/nvme.h 00:03:04.555 TEST_HEADER include/spdk/nvme_spec.h 00:03:04.555 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:04.555 TEST_HEADER include/spdk/nvme_zns.h 00:03:04.555 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:04.555 TEST_HEADER include/spdk/nvmf.h 00:03:04.555 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:04.555 TEST_HEADER include/spdk/nvmf_spec.h 00:03:04.555 TEST_HEADER include/spdk/nvmf_transport.h 00:03:04.555 TEST_HEADER include/spdk/pci_ids.h 00:03:04.555 TEST_HEADER include/spdk/opal_spec.h 00:03:04.555 TEST_HEADER include/spdk/opal.h 00:03:04.555 TEST_HEADER include/spdk/pipe.h 00:03:04.555 TEST_HEADER include/spdk/queue.h 00:03:04.555 TEST_HEADER include/spdk/reduce.h 00:03:04.555 TEST_HEADER include/spdk/scheduler.h 00:03:04.555 TEST_HEADER include/spdk/scsi.h 00:03:04.555 TEST_HEADER include/spdk/rpc.h 00:03:04.555 CC app/spdk_tgt/spdk_tgt.o 00:03:04.555 TEST_HEADER include/spdk/scsi_spec.h 00:03:04.555 TEST_HEADER include/spdk/sock.h 00:03:04.555 TEST_HEADER include/spdk/stdinc.h 00:03:04.555 TEST_HEADER include/spdk/string.h 00:03:04.555 TEST_HEADER include/spdk/thread.h 00:03:04.555 TEST_HEADER include/spdk/trace_parser.h 00:03:04.555 TEST_HEADER include/spdk/trace.h 00:03:04.555 TEST_HEADER include/spdk/tree.h 00:03:04.555 TEST_HEADER include/spdk/util.h 00:03:04.555 TEST_HEADER include/spdk/ublk.h 00:03:04.555 TEST_HEADER include/spdk/version.h 00:03:04.555 TEST_HEADER 
include/spdk/vfio_user_pci.h 00:03:04.555 TEST_HEADER include/spdk/vhost.h 00:03:04.555 TEST_HEADER include/spdk/vmd.h 00:03:04.555 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:04.555 TEST_HEADER include/spdk/uuid.h 00:03:04.555 TEST_HEADER include/spdk/xor.h 00:03:04.555 TEST_HEADER include/spdk/zipf.h 00:03:04.555 CXX test/cpp_headers/accel_module.o 00:03:04.555 CXX test/cpp_headers/accel.o 00:03:04.555 CXX test/cpp_headers/barrier.o 00:03:04.555 CXX test/cpp_headers/assert.o 00:03:04.555 CXX test/cpp_headers/bdev.o 00:03:04.555 CXX test/cpp_headers/bdev_module.o 00:03:04.555 CXX test/cpp_headers/bit_array.o 00:03:04.555 CXX test/cpp_headers/bdev_zone.o 00:03:04.555 CXX test/cpp_headers/bit_pool.o 00:03:04.555 CXX test/cpp_headers/base64.o 00:03:04.555 CXX test/cpp_headers/blobfs_bdev.o 00:03:04.555 CXX test/cpp_headers/blob_bdev.o 00:03:04.555 CXX test/cpp_headers/blobfs.o 00:03:04.555 CXX test/cpp_headers/blob.o 00:03:04.555 CXX test/cpp_headers/conf.o 00:03:04.555 CXX test/cpp_headers/config.o 00:03:04.555 CXX test/cpp_headers/cpuset.o 00:03:04.555 CXX test/cpp_headers/crc32.o 00:03:04.555 CXX test/cpp_headers/crc16.o 00:03:04.555 CXX test/cpp_headers/dif.o 00:03:04.555 CXX test/cpp_headers/crc64.o 00:03:04.555 CXX test/cpp_headers/dma.o 00:03:04.555 CXX test/cpp_headers/env.o 00:03:04.555 CXX test/cpp_headers/endian.o 00:03:04.555 CXX test/cpp_headers/event.o 00:03:04.555 CXX test/cpp_headers/fd_group.o 00:03:04.555 CXX test/cpp_headers/env_dpdk.o 00:03:04.555 CXX test/cpp_headers/fd.o 00:03:04.555 CXX test/cpp_headers/fsdev.o 00:03:04.556 CXX test/cpp_headers/fsdev_module.o 00:03:04.556 CXX test/cpp_headers/file.o 00:03:04.556 CXX test/cpp_headers/fuse_dispatcher.o 00:03:04.556 CXX test/cpp_headers/ftl.o 00:03:04.556 CXX test/cpp_headers/gpt_spec.o 00:03:04.556 CXX test/cpp_headers/histogram_data.o 00:03:04.556 CXX test/cpp_headers/hexlify.o 00:03:04.556 CXX test/cpp_headers/idxd.o 00:03:04.556 CXX test/cpp_headers/idxd_spec.o 00:03:04.556 CXX 
test/cpp_headers/init.o 00:03:04.556 CXX test/cpp_headers/ioat.o 00:03:04.556 CXX test/cpp_headers/ioat_spec.o 00:03:04.556 CXX test/cpp_headers/iscsi_spec.o 00:03:04.556 CXX test/cpp_headers/json.o 00:03:04.556 CXX test/cpp_headers/keyring.o 00:03:04.556 CXX test/cpp_headers/keyring_module.o 00:03:04.556 CXX test/cpp_headers/jsonrpc.o 00:03:04.556 CXX test/cpp_headers/likely.o 00:03:04.556 CXX test/cpp_headers/log.o 00:03:04.556 CXX test/cpp_headers/lvol.o 00:03:04.556 CXX test/cpp_headers/md5.o 00:03:04.556 CXX test/cpp_headers/memory.o 00:03:04.556 CXX test/cpp_headers/mmio.o 00:03:04.556 CXX test/cpp_headers/nbd.o 00:03:04.556 CXX test/cpp_headers/net.o 00:03:04.556 CXX test/cpp_headers/nvme.o 00:03:04.556 CXX test/cpp_headers/notify.o 00:03:04.556 CXX test/cpp_headers/nvme_intel.o 00:03:04.556 CXX test/cpp_headers/nvme_ocssd.o 00:03:04.556 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:04.556 CXX test/cpp_headers/nvme_spec.o 00:03:04.556 CXX test/cpp_headers/nvme_zns.o 00:03:04.556 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:04.556 CXX test/cpp_headers/nvmf_cmd.o 00:03:04.556 CXX test/cpp_headers/nvmf.o 00:03:04.556 CXX test/cpp_headers/nvmf_spec.o 00:03:04.556 CXX test/cpp_headers/nvmf_transport.o 00:03:04.556 CXX test/cpp_headers/opal.o 00:03:04.556 CC examples/ioat/verify/verify.o 00:03:04.556 CC examples/util/zipf/zipf.o 00:03:04.556 CC examples/ioat/perf/perf.o 00:03:04.556 CC test/thread/poller_perf/poller_perf.o 00:03:04.556 CC test/env/pci/pci_ut.o 00:03:04.556 CC test/env/memory/memory_ut.o 00:03:04.556 CXX test/cpp_headers/opal_spec.o 00:03:04.556 CC test/env/vtophys/vtophys.o 00:03:04.556 CC app/fio/nvme/fio_plugin.o 00:03:04.556 CC test/app/histogram_perf/histogram_perf.o 00:03:04.556 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:04.556 CC test/dma/test_dma/test_dma.o 00:03:04.556 CC test/app/jsoncat/jsoncat.o 00:03:04.556 CC test/app/stub/stub.o 00:03:04.831 CC test/app/bdev_svc/bdev_svc.o 00:03:04.831 CC app/fio/bdev/fio_plugin.o 
00:03:04.831 LINK spdk_lspci 00:03:04.831 LINK rpc_client_test 00:03:04.831 LINK interrupt_tgt 00:03:04.831 LINK spdk_nvme_discover 00:03:04.831 LINK nvmf_tgt 00:03:05.100 LINK spdk_trace_record 00:03:05.100 CC test/env/mem_callbacks/mem_callbacks.o 00:03:05.100 LINK iscsi_tgt 00:03:05.100 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:05.100 LINK poller_perf 00:03:05.100 CXX test/cpp_headers/pci_ids.o 00:03:05.100 CXX test/cpp_headers/pipe.o 00:03:05.100 CXX test/cpp_headers/queue.o 00:03:05.100 CXX test/cpp_headers/reduce.o 00:03:05.100 CXX test/cpp_headers/rpc.o 00:03:05.100 CXX test/cpp_headers/scheduler.o 00:03:05.100 CXX test/cpp_headers/scsi.o 00:03:05.100 CXX test/cpp_headers/scsi_spec.o 00:03:05.100 CXX test/cpp_headers/sock.o 00:03:05.100 CXX test/cpp_headers/stdinc.o 00:03:05.100 CXX test/cpp_headers/string.o 00:03:05.100 CXX test/cpp_headers/thread.o 00:03:05.100 CXX test/cpp_headers/trace.o 00:03:05.100 CXX test/cpp_headers/trace_parser.o 00:03:05.100 CXX test/cpp_headers/tree.o 00:03:05.100 CXX test/cpp_headers/ublk.o 00:03:05.100 CXX test/cpp_headers/util.o 00:03:05.100 CXX test/cpp_headers/uuid.o 00:03:05.100 CXX test/cpp_headers/version.o 00:03:05.100 CXX test/cpp_headers/vfio_user_pci.o 00:03:05.100 CXX test/cpp_headers/vfio_user_spec.o 00:03:05.100 LINK jsoncat 00:03:05.100 LINK zipf 00:03:05.100 CXX test/cpp_headers/vhost.o 00:03:05.100 CXX test/cpp_headers/vmd.o 00:03:05.100 LINK vtophys 00:03:05.100 CXX test/cpp_headers/xor.o 00:03:05.100 CXX test/cpp_headers/zipf.o 00:03:05.100 LINK ioat_perf 00:03:05.100 LINK stub 00:03:05.100 LINK histogram_perf 00:03:05.359 LINK env_dpdk_post_init 00:03:05.359 LINK bdev_svc 00:03:05.359 LINK spdk_tgt 00:03:05.359 LINK verify 00:03:05.359 LINK spdk_trace 00:03:05.359 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:05.359 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:05.359 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:05.359 LINK spdk_dd 00:03:05.359 LINK pci_ut 00:03:05.618 LINK spdk_bdev 00:03:05.618 
LINK test_dma 00:03:05.618 LINK spdk_nvme 00:03:05.618 CC test/event/reactor_perf/reactor_perf.o 00:03:05.618 CC app/vhost/vhost.o 00:03:05.618 CC test/event/reactor/reactor.o 00:03:05.618 CC examples/vmd/lsvmd/lsvmd.o 00:03:05.618 CC examples/sock/hello_world/hello_sock.o 00:03:05.618 CC test/event/event_perf/event_perf.o 00:03:05.618 CC examples/idxd/perf/perf.o 00:03:05.618 CC examples/vmd/led/led.o 00:03:05.618 CC test/event/app_repeat/app_repeat.o 00:03:05.618 CC examples/thread/thread/thread_ex.o 00:03:05.618 CC test/event/scheduler/scheduler.o 00:03:05.618 LINK spdk_nvme_perf 00:03:05.618 LINK spdk_nvme_identify 00:03:05.618 LINK nvme_fuzz 00:03:05.876 LINK vhost_fuzz 00:03:05.876 LINK mem_callbacks 00:03:05.876 LINK spdk_top 00:03:05.876 LINK lsvmd 00:03:05.876 LINK reactor_perf 00:03:05.876 LINK reactor 00:03:05.876 LINK event_perf 00:03:05.876 LINK app_repeat 00:03:05.876 LINK led 00:03:05.876 LINK vhost 00:03:05.876 LINK hello_sock 00:03:05.876 LINK scheduler 00:03:05.876 LINK thread 00:03:05.876 LINK idxd_perf 00:03:06.135 CC test/nvme/simple_copy/simple_copy.o 00:03:06.135 CC test/nvme/overhead/overhead.o 00:03:06.135 CC test/nvme/sgl/sgl.o 00:03:06.135 CC test/nvme/startup/startup.o 00:03:06.135 CC test/nvme/e2edp/nvme_dp.o 00:03:06.135 CC test/nvme/boot_partition/boot_partition.o 00:03:06.135 CC test/nvme/reset/reset.o 00:03:06.135 CC test/nvme/connect_stress/connect_stress.o 00:03:06.135 CC test/nvme/fdp/fdp.o 00:03:06.135 CC test/nvme/aer/aer.o 00:03:06.135 CC test/nvme/reserve/reserve.o 00:03:06.135 CC test/nvme/compliance/nvme_compliance.o 00:03:06.135 CC test/nvme/fused_ordering/fused_ordering.o 00:03:06.135 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:06.135 CC test/nvme/cuse/cuse.o 00:03:06.135 CC test/nvme/err_injection/err_injection.o 00:03:06.135 LINK memory_ut 00:03:06.135 CC test/blobfs/mkfs/mkfs.o 00:03:06.135 CC test/accel/dif/dif.o 00:03:06.394 LINK startup 00:03:06.394 CC test/lvol/esnap/esnap.o 00:03:06.394 LINK boot_partition 
00:03:06.394 LINK reserve 00:03:06.394 LINK connect_stress 00:03:06.394 LINK simple_copy 00:03:06.394 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:06.394 LINK doorbell_aers 00:03:06.394 CC examples/nvme/reconnect/reconnect.o 00:03:06.394 CC examples/nvme/arbitration/arbitration.o 00:03:06.394 CC examples/nvme/abort/abort.o 00:03:06.394 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:06.394 LINK err_injection 00:03:06.394 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:06.394 CC examples/nvme/hello_world/hello_world.o 00:03:06.394 CC examples/nvme/hotplug/hotplug.o 00:03:06.394 LINK reset 00:03:06.394 LINK fused_ordering 00:03:06.394 LINK sgl 00:03:06.394 LINK overhead 00:03:06.394 LINK nvme_dp 00:03:06.394 LINK mkfs 00:03:06.394 LINK aer 00:03:06.394 LINK nvme_compliance 00:03:06.394 CC examples/accel/perf/accel_perf.o 00:03:06.394 LINK fdp 00:03:06.394 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:06.394 CC examples/blob/cli/blobcli.o 00:03:06.652 CC examples/blob/hello_world/hello_blob.o 00:03:06.652 LINK pmr_persistence 00:03:06.652 LINK cmb_copy 00:03:06.652 LINK hotplug 00:03:06.652 LINK hello_world 00:03:06.652 LINK arbitration 00:03:06.652 LINK reconnect 00:03:06.652 LINK abort 00:03:06.652 LINK nvme_manage 00:03:06.652 LINK hello_fsdev 00:03:06.652 LINK dif 00:03:06.652 LINK hello_blob 00:03:06.652 LINK iscsi_fuzz 00:03:06.911 LINK accel_perf 00:03:06.911 LINK blobcli 00:03:07.170 LINK cuse 00:03:07.170 CC test/bdev/bdevio/bdevio.o 00:03:07.429 CC examples/bdev/hello_world/hello_bdev.o 00:03:07.429 CC examples/bdev/bdevperf/bdevperf.o 00:03:07.429 LINK hello_bdev 00:03:07.688 LINK bdevio 00:03:07.947 LINK bdevperf 00:03:08.516 CC examples/nvmf/nvmf/nvmf.o 00:03:08.774 LINK nvmf 00:03:09.711 LINK esnap 00:03:09.971 00:03:09.971 real 0m55.742s 00:03:09.971 user 8m14.750s 00:03:09.971 sys 3m44.521s 00:03:09.971 10:20:50 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:09.971 10:20:50 make -- common/autotest_common.sh@10 -- $ set +x 
00:03:09.971 ************************************ 00:03:09.971 END TEST make 00:03:09.971 ************************************ 00:03:10.231 10:20:50 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:10.231 10:20:50 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:10.231 10:20:50 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:10.231 10:20:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.231 10:20:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:03:10.231 10:20:50 -- pm/common@44 -- $ pid=2951533 00:03:10.231 10:20:50 -- pm/common@50 -- $ kill -TERM 2951533 00:03:10.231 10:20:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.231 10:20:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:03:10.231 10:20:50 -- pm/common@44 -- $ pid=2951535 00:03:10.231 10:20:50 -- pm/common@50 -- $ kill -TERM 2951535 00:03:10.231 10:20:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.231 10:20:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:03:10.231 10:20:50 -- pm/common@44 -- $ pid=2951536 00:03:10.231 10:20:50 -- pm/common@50 -- $ kill -TERM 2951536 00:03:10.231 10:20:50 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.231 10:20:50 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:03:10.231 10:20:50 -- pm/common@44 -- $ pid=2951562 00:03:10.231 10:20:50 -- pm/common@50 -- $ sudo -E kill -TERM 2951562 00:03:10.231 10:20:50 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:10.231 10:20:50 -- spdk/autorun.sh@27 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:03:10.231 10:20:50 -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:03:10.231 10:20:50 -- common/autotest_common.sh@1693 -- # lcov --version 00:03:10.231 10:20:50 -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:03:10.231 10:20:50 -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:03:10.231 10:20:50 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:10.231 10:20:50 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:10.231 10:20:50 -- scripts/common.sh@334 -- # local ver2 ver2_l 00:03:10.231 10:20:50 -- scripts/common.sh@336 -- # IFS=.-: 00:03:10.231 10:20:50 -- scripts/common.sh@336 -- # read -ra ver1 00:03:10.231 10:20:50 -- scripts/common.sh@337 -- # IFS=.-: 00:03:10.231 10:20:50 -- scripts/common.sh@337 -- # read -ra ver2 00:03:10.231 10:20:50 -- scripts/common.sh@338 -- # local 'op=<' 00:03:10.231 10:20:50 -- scripts/common.sh@340 -- # ver1_l=2 00:03:10.231 10:20:50 -- scripts/common.sh@341 -- # ver2_l=1 00:03:10.231 10:20:50 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:10.231 10:20:50 -- scripts/common.sh@344 -- # case "$op" in 00:03:10.231 10:20:50 -- scripts/common.sh@345 -- # : 1 00:03:10.231 10:20:50 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:10.231 10:20:50 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:03:10.231 10:20:50 -- scripts/common.sh@365 -- # decimal 1 00:03:10.231 10:20:50 -- scripts/common.sh@353 -- # local d=1 00:03:10.231 10:20:50 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:10.231 10:20:50 -- scripts/common.sh@355 -- # echo 1 00:03:10.231 10:20:50 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:10.231 10:20:50 -- scripts/common.sh@366 -- # decimal 2 00:03:10.231 10:20:50 -- scripts/common.sh@353 -- # local d=2 00:03:10.231 10:20:50 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:10.231 10:20:50 -- scripts/common.sh@355 -- # echo 2 00:03:10.231 10:20:50 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:10.231 10:20:50 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:10.231 10:20:50 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:10.231 10:20:50 -- scripts/common.sh@368 -- # return 0 00:03:10.231 10:20:50 -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:10.231 10:20:50 -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:03:10.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.231 --rc genhtml_branch_coverage=1 00:03:10.231 --rc genhtml_function_coverage=1 00:03:10.231 --rc genhtml_legend=1 00:03:10.231 --rc geninfo_all_blocks=1 00:03:10.231 --rc geninfo_unexecuted_blocks=1 00:03:10.231 00:03:10.231 ' 00:03:10.231 10:20:50 -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:03:10.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.231 --rc genhtml_branch_coverage=1 00:03:10.231 --rc genhtml_function_coverage=1 00:03:10.231 --rc genhtml_legend=1 00:03:10.231 --rc geninfo_all_blocks=1 00:03:10.231 --rc geninfo_unexecuted_blocks=1 00:03:10.231 00:03:10.231 ' 00:03:10.231 10:20:50 -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:03:10.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.231 --rc genhtml_branch_coverage=1 00:03:10.231 --rc 
genhtml_function_coverage=1 00:03:10.231 --rc genhtml_legend=1 00:03:10.231 --rc geninfo_all_blocks=1 00:03:10.231 --rc geninfo_unexecuted_blocks=1 00:03:10.231 00:03:10.231 ' 00:03:10.231 10:20:50 -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:03:10.231 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:10.231 --rc genhtml_branch_coverage=1 00:03:10.231 --rc genhtml_function_coverage=1 00:03:10.231 --rc genhtml_legend=1 00:03:10.231 --rc geninfo_all_blocks=1 00:03:10.231 --rc geninfo_unexecuted_blocks=1 00:03:10.231 00:03:10.231 ' 00:03:10.231 10:20:50 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:03:10.231 10:20:50 -- nvmf/common.sh@7 -- # uname -s 00:03:10.231 10:20:50 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:03:10.231 10:20:50 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:10.231 10:20:50 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:10.231 10:20:50 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:10.231 10:20:50 -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:10.231 10:20:50 -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:03:10.231 10:20:50 -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:10.231 10:20:50 -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:03:10.231 10:20:50 -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:03:10.231 10:20:50 -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:03:10.231 10:20:50 -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:10.231 10:20:50 -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:03:10.231 10:20:50 -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:03:10.231 10:20:50 -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:10.231 10:20:50 -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:03:10.231 10:20:50 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:10.231 10:20:50 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:10.231 10:20:50 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:10.231 10:20:50 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:10.231 10:20:50 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.231 10:20:50 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.231 10:20:50 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.231 10:20:50 -- paths/export.sh@5 -- # export PATH 00:03:10.231 10:20:50 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:10.231 10:20:50 -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:03:10.231 10:20:50 -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:03:10.231 10:20:50 -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:03:10.231 10:20:50 -- nvmf/setup.sh@8 -- # 
NVMF_TARGET_NS_CMD=() 00:03:10.231 10:20:50 -- nvmf/common.sh@50 -- # : 0 00:03:10.231 10:20:50 -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:03:10.231 10:20:50 -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:03:10.231 10:20:50 -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:03:10.231 10:20:50 -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:10.231 10:20:50 -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:10.231 10:20:50 -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:03:10.231 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:03:10.231 10:20:50 -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:03:10.231 10:20:50 -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:03:10.231 10:20:50 -- nvmf/common.sh@54 -- # have_pci_nics=0 00:03:10.231 10:20:50 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:10.231 10:20:50 -- spdk/autotest.sh@32 -- # uname -s 00:03:10.231 10:20:50 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:10.231 10:20:50 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:10.231 10:20:50 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.490 10:20:50 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:03:10.490 10:20:50 -- spdk/autotest.sh@40 -- # echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:03:10.490 10:20:50 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:10.490 10:20:50 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:10.490 10:20:50 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:10.490 10:20:50 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:10.490 10:20:50 -- spdk/autotest.sh@48 -- # udevadm_pid=3013992 00:03:10.490 10:20:50 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:10.491 
10:20:50 -- pm/common@17 -- # local monitor 00:03:10.491 10:20:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.491 10:20:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.491 10:20:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.491 10:20:50 -- pm/common@21 -- # date +%s 00:03:10.491 10:20:50 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:10.491 10:20:50 -- pm/common@21 -- # date +%s 00:03:10.491 10:20:50 -- pm/common@25 -- # sleep 1 00:03:10.491 10:20:50 -- pm/common@21 -- # date +%s 00:03:10.491 10:20:50 -- pm/common@21 -- # date +%s 00:03:10.491 10:20:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094450 00:03:10.491 10:20:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094450 00:03:10.491 10:20:50 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094450 00:03:10.491 10:20:50 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1732094450 00:03:10.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094450_collect-cpu-load.pm.log 00:03:10.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094450_collect-vmstat.pm.log 00:03:10.491 Redirecting to 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094450_collect-cpu-temp.pm.log 00:03:10.491 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1732094450_collect-bmc-pm.bmc.pm.log 00:03:11.537 10:20:51 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:11.537 10:20:51 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:11.537 10:20:51 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:11.537 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:03:11.537 10:20:51 -- spdk/autotest.sh@59 -- # create_test_list 00:03:11.537 10:20:51 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:11.537 10:20:51 -- common/autotest_common.sh@10 -- # set +x 00:03:11.537 10:20:52 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:03:11.537 10:20:52 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.537 10:20:52 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.537 10:20:52 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:03:11.537 10:20:52 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:03:11.537 10:20:52 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:11.537 10:20:52 -- common/autotest_common.sh@1457 -- # uname 00:03:11.537 10:20:52 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:11.537 10:20:52 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:11.537 10:20:52 -- common/autotest_common.sh@1477 -- # uname 00:03:11.537 10:20:52 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:11.537 10:20:52 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:11.537 10:20:52 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc 
genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:11.537 lcov: LCOV version 1.15 00:03:11.537 10:20:52 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:03:23.785 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:23.785 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:03:35.989 10:21:16 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:35.989 10:21:16 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:35.989 10:21:16 -- common/autotest_common.sh@10 -- # set +x 00:03:35.989 10:21:16 -- spdk/autotest.sh@78 -- # rm -f 00:03:35.989 10:21:16 -- spdk/autotest.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:39.278 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:03:39.278 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.6 
(8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:03:39.278 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:03:39.278 10:21:19 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:39.278 10:21:19 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:39.278 10:21:19 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:39.278 10:21:19 -- common/autotest_common.sh@1658 -- # local nvme bdf 00:03:39.278 10:21:19 -- common/autotest_common.sh@1660 -- # for nvme in /sys/block/nvme* 00:03:39.278 10:21:19 -- common/autotest_common.sh@1661 -- # is_block_zoned nvme0n1 00:03:39.278 10:21:19 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:39.278 10:21:19 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:39.278 10:21:19 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:39.278 10:21:19 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:39.278 10:21:19 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:39.278 10:21:19 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:39.278 10:21:19 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:39.278 10:21:19 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:39.278 10:21:19 -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:39.278 No valid GPT data, bailing 00:03:39.278 10:21:19 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:39.278 10:21:19 -- scripts/common.sh@394 -- # pt= 00:03:39.278 10:21:19 -- scripts/common.sh@395 -- # return 1 00:03:39.278 10:21:19 -- 
spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:39.278 1+0 records in 00:03:39.278 1+0 records out 00:03:39.278 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00508995 s, 206 MB/s 00:03:39.278 10:21:19 -- spdk/autotest.sh@105 -- # sync 00:03:39.278 10:21:19 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:39.278 10:21:19 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:39.278 10:21:19 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:45.878 10:21:25 -- spdk/autotest.sh@111 -- # uname -s 00:03:45.878 10:21:25 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:45.878 10:21:25 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:45.878 10:21:25 -- spdk/autotest.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:47.783 Hugepages 00:03:47.783 node hugesize free / total 00:03:47.783 node0 1048576kB 0 / 0 00:03:47.783 node0 2048kB 0 / 0 00:03:47.783 node1 1048576kB 0 / 0 00:03:47.783 node1 2048kB 0 / 0 00:03:47.783 00:03:47.783 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:47.783 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:03:47.783 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:03:47.783 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:03:47.783 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 
0000:80:04.6 8086 2021 1 ioatdma - - 00:03:47.783 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:03:48.042 10:21:28 -- spdk/autotest.sh@117 -- # uname -s 00:03:48.042 10:21:28 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:48.042 10:21:28 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:48.042 10:21:28 -- common/autotest_common.sh@1516 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:51.337 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:51.337 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:51.338 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:51.338 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:51.338 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:51.338 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:52.274 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:52.533 10:21:33 -- common/autotest_common.sh@1517 -- # sleep 1 00:03:53.471 10:21:34 -- common/autotest_common.sh@1518 -- # bdfs=() 00:03:53.471 10:21:34 -- common/autotest_common.sh@1518 -- # local bdfs 00:03:53.471 10:21:34 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:03:53.471 10:21:34 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:03:53.471 10:21:34 -- common/autotest_common.sh@1498 -- # bdfs=() 00:03:53.471 10:21:34 -- common/autotest_common.sh@1498 -- # local bdfs 00:03:53.471 10:21:34 -- common/autotest_common.sh@1499 -- # 
bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:03:53.471 10:21:34 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:03:53.471 10:21:34 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:03:53.471 10:21:34 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:03:53.471 10:21:34 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:03:53.471 10:21:34 -- common/autotest_common.sh@1522 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:56.788 Waiting for block devices as requested 00:03:56.788 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:03:56.788 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:56.788 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:56.788 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:56.788 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:56.788 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:56.788 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:57.047 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:57.047 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:57.047 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:03:57.047 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:03:57.306 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:03:57.306 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:03:57.306 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:03:57.565 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:03:57.565 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:03:57.565 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:03:57.824 10:21:38 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:03:57.824 10:21:38 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1487 -- # grep 
0000:5e:00.0/nvme/nvme 00:03:57.824 10:21:38 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:03:57.824 10:21:38 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:03:57.824 10:21:38 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1531 -- # grep oacs 00:03:57.824 10:21:38 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:03:57.824 10:21:38 -- common/autotest_common.sh@1531 -- # oacs=' 0xe' 00:03:57.824 10:21:38 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:03:57.824 10:21:38 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:03:57.824 10:21:38 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:03:57.824 10:21:38 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:03:57.824 10:21:38 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:03:57.824 10:21:38 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:03:57.824 10:21:38 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:03:57.824 10:21:38 -- common/autotest_common.sh@1543 -- # continue 00:03:57.824 10:21:38 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:03:57.824 10:21:38 -- common/autotest_common.sh@732 -- # xtrace_disable 00:03:57.824 10:21:38 -- common/autotest_common.sh@10 -- # set +x 00:03:57.824 10:21:38 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:03:57.824 10:21:38 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:57.824 10:21:38 -- common/autotest_common.sh@10 -- # 
set +x 00:03:57.824 10:21:38 -- spdk/autotest.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:01.116 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:01.116 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:02.495 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:02.495 10:21:42 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:02.495 10:21:42 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:02.495 10:21:42 -- common/autotest_common.sh@10 -- # set +x 00:04:02.495 10:21:42 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:02.495 10:21:42 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:02.495 10:21:42 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:02.495 10:21:42 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:02.495 10:21:42 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:02.495 10:21:42 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:02.495 10:21:42 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:02.495 10:21:42 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:02.495 10:21:42 -- 
common/autotest_common.sh@1498 -- # bdfs=() 00:04:02.495 10:21:42 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:02.495 10:21:42 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:02.495 10:21:42 -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:02.495 10:21:42 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:02.495 10:21:43 -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:04:02.495 10:21:43 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:04:02.495 10:21:43 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:02.495 10:21:43 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:02.495 10:21:43 -- common/autotest_common.sh@1566 -- # device=0x0a54 00:04:02.495 10:21:43 -- common/autotest_common.sh@1567 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:02.495 10:21:43 -- common/autotest_common.sh@1568 -- # bdfs+=($bdf) 00:04:02.495 10:21:43 -- common/autotest_common.sh@1572 -- # (( 1 > 0 )) 00:04:02.495 10:21:43 -- common/autotest_common.sh@1573 -- # printf '%s\n' 0000:5e:00.0 00:04:02.495 10:21:43 -- common/autotest_common.sh@1579 -- # [[ -z 0000:5e:00.0 ]] 00:04:02.495 10:21:43 -- common/autotest_common.sh@1584 -- # spdk_tgt_pid=3028755 00:04:02.495 10:21:43 -- common/autotest_common.sh@1583 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:02.495 10:21:43 -- common/autotest_common.sh@1585 -- # waitforlisten 3028755 00:04:02.495 10:21:43 -- common/autotest_common.sh@835 -- # '[' -z 3028755 ']' 00:04:02.495 10:21:43 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:02.495 10:21:43 -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:02.495 10:21:43 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:04:02.495 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:02.495 10:21:43 -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:02.495 10:21:43 -- common/autotest_common.sh@10 -- # set +x 00:04:02.495 [2024-11-20 10:21:43.094263] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:02.495 [2024-11-20 10:21:43.094305] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3028755 ] 00:04:02.495 [2024-11-20 10:21:43.167223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:02.495 [2024-11-20 10:21:43.209419] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:02.754 10:21:43 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:02.754 10:21:43 -- common/autotest_common.sh@868 -- # return 0 00:04:02.754 10:21:43 -- common/autotest_common.sh@1587 -- # bdf_id=0 00:04:02.754 10:21:43 -- common/autotest_common.sh@1588 -- # for bdf in "${bdfs[@]}" 00:04:02.754 10:21:43 -- common/autotest_common.sh@1589 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:06.041 nvme0n1 00:04:06.041 10:21:46 -- common/autotest_common.sh@1591 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:06.041 [2024-11-20 10:21:46.584864] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:06.041 request: 00:04:06.041 { 00:04:06.041 "nvme_ctrlr_name": "nvme0", 00:04:06.041 "password": "test", 00:04:06.041 "method": "bdev_nvme_opal_revert", 00:04:06.041 "req_id": 1 00:04:06.041 } 00:04:06.041 Got JSON-RPC error response 00:04:06.041 response: 00:04:06.041 { 00:04:06.041 "code": -32602, 
00:04:06.041 "message": "Invalid parameters" 00:04:06.041 } 00:04:06.041 10:21:46 -- common/autotest_common.sh@1591 -- # true 00:04:06.041 10:21:46 -- common/autotest_common.sh@1592 -- # (( ++bdf_id )) 00:04:06.041 10:21:46 -- common/autotest_common.sh@1595 -- # killprocess 3028755 00:04:06.041 10:21:46 -- common/autotest_common.sh@954 -- # '[' -z 3028755 ']' 00:04:06.041 10:21:46 -- common/autotest_common.sh@958 -- # kill -0 3028755 00:04:06.041 10:21:46 -- common/autotest_common.sh@959 -- # uname 00:04:06.041 10:21:46 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:06.041 10:21:46 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3028755 00:04:06.041 10:21:46 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:06.041 10:21:46 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:06.041 10:21:46 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3028755' 00:04:06.041 killing process with pid 3028755 00:04:06.041 10:21:46 -- common/autotest_common.sh@973 -- # kill 3028755 00:04:06.041 10:21:46 -- common/autotest_common.sh@978 -- # wait 3028755 00:04:08.576 10:21:48 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.576 10:21:48 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.576 10:21:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.576 10:21:48 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.576 10:21:48 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.576 10:21:48 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.576 10:21:48 -- common/autotest_common.sh@10 -- # set +x 00:04:08.576 10:21:48 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.576 10:21:48 -- spdk/autotest.sh@155 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:08.576 10:21:48 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.576 10:21:48 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.576 10:21:48 -- 
common/autotest_common.sh@10 -- # set +x 00:04:08.576 ************************************ 00:04:08.576 START TEST env 00:04:08.576 ************************************ 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:08.576 * Looking for test storage... 00:04:08.576 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1693 -- # lcov --version 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:08.576 10:21:48 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.576 10:21:48 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.576 10:21:48 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.576 10:21:48 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.576 10:21:48 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.576 10:21:48 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.576 10:21:48 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.576 10:21:48 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.576 10:21:48 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.576 10:21:48 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.576 10:21:48 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.576 10:21:48 env -- scripts/common.sh@344 -- # case "$op" in 00:04:08.576 10:21:48 env -- scripts/common.sh@345 -- # : 1 00:04:08.576 10:21:48 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.576 10:21:48 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:08.576 10:21:48 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.576 10:21:48 env -- scripts/common.sh@353 -- # local d=1 00:04:08.576 10:21:48 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.576 10:21:48 env -- scripts/common.sh@355 -- # echo 1 00:04:08.576 10:21:48 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.576 10:21:48 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.576 10:21:48 env -- scripts/common.sh@353 -- # local d=2 00:04:08.576 10:21:48 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.576 10:21:48 env -- scripts/common.sh@355 -- # echo 2 00:04:08.576 10:21:48 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.576 10:21:48 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.576 10:21:48 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.576 10:21:48 env -- scripts/common.sh@368 -- # return 0 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.576 --rc genhtml_branch_coverage=1 00:04:08.576 --rc genhtml_function_coverage=1 00:04:08.576 --rc genhtml_legend=1 00:04:08.576 --rc geninfo_all_blocks=1 00:04:08.576 --rc geninfo_unexecuted_blocks=1 00:04:08.576 00:04:08.576 ' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.576 --rc genhtml_branch_coverage=1 00:04:08.576 --rc genhtml_function_coverage=1 00:04:08.576 --rc genhtml_legend=1 00:04:08.576 --rc geninfo_all_blocks=1 00:04:08.576 --rc geninfo_unexecuted_blocks=1 00:04:08.576 00:04:08.576 ' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:08.576 --rc genhtml_branch_coverage=1 00:04:08.576 --rc genhtml_function_coverage=1 00:04:08.576 --rc genhtml_legend=1 00:04:08.576 --rc geninfo_all_blocks=1 00:04:08.576 --rc geninfo_unexecuted_blocks=1 00:04:08.576 00:04:08.576 ' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:08.576 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.576 --rc genhtml_branch_coverage=1 00:04:08.576 --rc genhtml_function_coverage=1 00:04:08.576 --rc genhtml_legend=1 00:04:08.576 --rc geninfo_all_blocks=1 00:04:08.576 --rc geninfo_unexecuted_blocks=1 00:04:08.576 00:04:08.576 ' 00:04:08.576 10:21:48 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.576 10:21:48 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.576 10:21:48 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.576 ************************************ 00:04:08.576 START TEST env_memory 00:04:08.576 ************************************ 00:04:08.576 10:21:49 env.env_memory -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:08.576 00:04:08.576 00:04:08.576 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.576 http://cunit.sourceforge.net/ 00:04:08.576 00:04:08.576 00:04:08.576 Suite: memory 00:04:08.576 Test: alloc and free memory map ...[2024-11-20 10:21:49.057951] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:08.576 passed 00:04:08.576 Test: mem map translation ...[2024-11-20 10:21:49.076009] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:08.576 [2024-11-20 
10:21:49.076023] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:08.576 [2024-11-20 10:21:49.076056] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:08.576 [2024-11-20 10:21:49.076061] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:08.576 passed 00:04:08.576 Test: mem map registration ...[2024-11-20 10:21:49.111660] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:08.576 [2024-11-20 10:21:49.111673] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:08.576 passed 00:04:08.576 Test: mem map adjacent registrations ...passed 00:04:08.576 00:04:08.576 Run Summary: Type Total Ran Passed Failed Inactive 00:04:08.576 suites 1 1 n/a 0 0 00:04:08.576 tests 4 4 4 0 0 00:04:08.576 asserts 152 152 152 0 n/a 00:04:08.576 00:04:08.576 Elapsed time = 0.133 seconds 00:04:08.576 00:04:08.576 real 0m0.146s 00:04:08.576 user 0m0.138s 00:04:08.576 sys 0m0.007s 00:04:08.576 10:21:49 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:08.576 10:21:49 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:08.577 ************************************ 00:04:08.577 END TEST env_memory 00:04:08.577 ************************************ 00:04:08.577 10:21:49 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.577 10:21:49 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 
']' 00:04:08.577 10:21:49 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.577 10:21:49 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.577 ************************************ 00:04:08.577 START TEST env_vtophys 00:04:08.577 ************************************ 00:04:08.577 10:21:49 env.env_vtophys -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:08.577 EAL: lib.eal log level changed from notice to debug 00:04:08.577 EAL: Detected lcore 0 as core 0 on socket 0 00:04:08.577 EAL: Detected lcore 1 as core 1 on socket 0 00:04:08.577 EAL: Detected lcore 2 as core 2 on socket 0 00:04:08.577 EAL: Detected lcore 3 as core 3 on socket 0 00:04:08.577 EAL: Detected lcore 4 as core 4 on socket 0 00:04:08.577 EAL: Detected lcore 5 as core 5 on socket 0 00:04:08.577 EAL: Detected lcore 6 as core 6 on socket 0 00:04:08.577 EAL: Detected lcore 7 as core 8 on socket 0 00:04:08.577 EAL: Detected lcore 8 as core 9 on socket 0 00:04:08.577 EAL: Detected lcore 9 as core 10 on socket 0 00:04:08.577 EAL: Detected lcore 10 as core 11 on socket 0 00:04:08.577 EAL: Detected lcore 11 as core 12 on socket 0 00:04:08.577 EAL: Detected lcore 12 as core 13 on socket 0 00:04:08.577 EAL: Detected lcore 13 as core 16 on socket 0 00:04:08.577 EAL: Detected lcore 14 as core 17 on socket 0 00:04:08.577 EAL: Detected lcore 15 as core 18 on socket 0 00:04:08.577 EAL: Detected lcore 16 as core 19 on socket 0 00:04:08.577 EAL: Detected lcore 17 as core 20 on socket 0 00:04:08.577 EAL: Detected lcore 18 as core 21 on socket 0 00:04:08.577 EAL: Detected lcore 19 as core 25 on socket 0 00:04:08.577 EAL: Detected lcore 20 as core 26 on socket 0 00:04:08.577 EAL: Detected lcore 21 as core 27 on socket 0 00:04:08.577 EAL: Detected lcore 22 as core 28 on socket 0 00:04:08.577 EAL: Detected lcore 23 as core 29 on socket 0 00:04:08.577 EAL: Detected lcore 24 as core 0 on socket 1 00:04:08.577 EAL: Detected lcore 25 
as core 1 on socket 1 00:04:08.577 EAL: Detected lcore 26 as core 2 on socket 1 00:04:08.577 EAL: Detected lcore 27 as core 3 on socket 1 00:04:08.577 EAL: Detected lcore 28 as core 4 on socket 1 00:04:08.577 EAL: Detected lcore 29 as core 5 on socket 1 00:04:08.577 EAL: Detected lcore 30 as core 6 on socket 1 00:04:08.577 EAL: Detected lcore 31 as core 8 on socket 1 00:04:08.577 EAL: Detected lcore 32 as core 10 on socket 1 00:04:08.577 EAL: Detected lcore 33 as core 11 on socket 1 00:04:08.577 EAL: Detected lcore 34 as core 12 on socket 1 00:04:08.577 EAL: Detected lcore 35 as core 13 on socket 1 00:04:08.577 EAL: Detected lcore 36 as core 16 on socket 1 00:04:08.577 EAL: Detected lcore 37 as core 17 on socket 1 00:04:08.577 EAL: Detected lcore 38 as core 18 on socket 1 00:04:08.577 EAL: Detected lcore 39 as core 19 on socket 1 00:04:08.577 EAL: Detected lcore 40 as core 20 on socket 1 00:04:08.577 EAL: Detected lcore 41 as core 21 on socket 1 00:04:08.577 EAL: Detected lcore 42 as core 24 on socket 1 00:04:08.577 EAL: Detected lcore 43 as core 25 on socket 1 00:04:08.577 EAL: Detected lcore 44 as core 26 on socket 1 00:04:08.577 EAL: Detected lcore 45 as core 27 on socket 1 00:04:08.577 EAL: Detected lcore 46 as core 28 on socket 1 00:04:08.577 EAL: Detected lcore 47 as core 29 on socket 1 00:04:08.577 EAL: Detected lcore 48 as core 0 on socket 0 00:04:08.577 EAL: Detected lcore 49 as core 1 on socket 0 00:04:08.577 EAL: Detected lcore 50 as core 2 on socket 0 00:04:08.577 EAL: Detected lcore 51 as core 3 on socket 0 00:04:08.577 EAL: Detected lcore 52 as core 4 on socket 0 00:04:08.577 EAL: Detected lcore 53 as core 5 on socket 0 00:04:08.577 EAL: Detected lcore 54 as core 6 on socket 0 00:04:08.577 EAL: Detected lcore 55 as core 8 on socket 0 00:04:08.577 EAL: Detected lcore 56 as core 9 on socket 0 00:04:08.577 EAL: Detected lcore 57 as core 10 on socket 0 00:04:08.577 EAL: Detected lcore 58 as core 11 on socket 0 00:04:08.577 EAL: Detected lcore 59 as core 
12 on socket 0 00:04:08.577 EAL: Detected lcore 60 as core 13 on socket 0 00:04:08.577 EAL: Detected lcore 61 as core 16 on socket 0 00:04:08.577 EAL: Detected lcore 62 as core 17 on socket 0 00:04:08.577 EAL: Detected lcore 63 as core 18 on socket 0 00:04:08.577 EAL: Detected lcore 64 as core 19 on socket 0 00:04:08.577 EAL: Detected lcore 65 as core 20 on socket 0 00:04:08.577 EAL: Detected lcore 66 as core 21 on socket 0 00:04:08.577 EAL: Detected lcore 67 as core 25 on socket 0 00:04:08.577 EAL: Detected lcore 68 as core 26 on socket 0 00:04:08.577 EAL: Detected lcore 69 as core 27 on socket 0 00:04:08.577 EAL: Detected lcore 70 as core 28 on socket 0 00:04:08.577 EAL: Detected lcore 71 as core 29 on socket 0 00:04:08.577 EAL: Detected lcore 72 as core 0 on socket 1 00:04:08.577 EAL: Detected lcore 73 as core 1 on socket 1 00:04:08.577 EAL: Detected lcore 74 as core 2 on socket 1 00:04:08.577 EAL: Detected lcore 75 as core 3 on socket 1 00:04:08.577 EAL: Detected lcore 76 as core 4 on socket 1 00:04:08.577 EAL: Detected lcore 77 as core 5 on socket 1 00:04:08.577 EAL: Detected lcore 78 as core 6 on socket 1 00:04:08.577 EAL: Detected lcore 79 as core 8 on socket 1 00:04:08.577 EAL: Detected lcore 80 as core 10 on socket 1 00:04:08.577 EAL: Detected lcore 81 as core 11 on socket 1 00:04:08.577 EAL: Detected lcore 82 as core 12 on socket 1 00:04:08.577 EAL: Detected lcore 83 as core 13 on socket 1 00:04:08.577 EAL: Detected lcore 84 as core 16 on socket 1 00:04:08.577 EAL: Detected lcore 85 as core 17 on socket 1 00:04:08.577 EAL: Detected lcore 86 as core 18 on socket 1 00:04:08.577 EAL: Detected lcore 87 as core 19 on socket 1 00:04:08.577 EAL: Detected lcore 88 as core 20 on socket 1 00:04:08.577 EAL: Detected lcore 89 as core 21 on socket 1 00:04:08.577 EAL: Detected lcore 90 as core 24 on socket 1 00:04:08.577 EAL: Detected lcore 91 as core 25 on socket 1 00:04:08.577 EAL: Detected lcore 92 as core 26 on socket 1 00:04:08.577 EAL: Detected lcore 93 as core 
27 on socket 1 00:04:08.577 EAL: Detected lcore 94 as core 28 on socket 1 00:04:08.577 EAL: Detected lcore 95 as core 29 on socket 1 00:04:08.577 EAL: Maximum logical cores by configuration: 128 00:04:08.577 EAL: Detected CPU lcores: 96 00:04:08.577 EAL: Detected NUMA nodes: 2 00:04:08.577 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:08.577 EAL: Detected shared linkage of DPDK 00:04:08.577 EAL: No shared files mode enabled, IPC will be disabled 00:04:08.577 EAL: Bus pci wants IOVA as 'DC' 00:04:08.577 EAL: Buses did not request a specific IOVA mode. 00:04:08.577 EAL: IOMMU is available, selecting IOVA as VA mode. 00:04:08.577 EAL: Selected IOVA mode 'VA' 00:04:08.577 EAL: Probing VFIO support... 00:04:08.577 EAL: IOMMU type 1 (Type 1) is supported 00:04:08.577 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:08.577 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:08.577 EAL: VFIO support initialized 00:04:08.577 EAL: Ask a virtual area of 0x2e000 bytes 00:04:08.577 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:08.577 EAL: Setting up physically contiguous memory... 
00:04:08.577 EAL: Setting maximum number of open files to 524288 00:04:08.577 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:08.577 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:08.577 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:08.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.577 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:08.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.577 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:08.577 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:08.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.577 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:08.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.577 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:08.577 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:08.577 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.577 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:08.577 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.577 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.577 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:08.577 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:08.578 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.578 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:08.578 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:08.578 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.578 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:08.578 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:08.578 EAL: Creating 4 segment lists: n_segs:8192 
socket_id:1 hugepage_sz:2097152 00:04:08.578 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.578 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:08.578 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.578 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.578 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:08.578 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:08.578 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.578 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:08.578 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.578 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.578 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:08.578 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:08.578 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.578 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:08.578 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.578 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.578 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:08.578 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:08.578 EAL: Ask a virtual area of 0x61000 bytes 00:04:08.578 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:08.578 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:08.578 EAL: Ask a virtual area of 0x400000000 bytes 00:04:08.578 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:08.578 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:08.578 EAL: Hugepages will be freed exactly as allocated. 
00:04:08.578 EAL: No shared files mode enabled, IPC is disabled 00:04:08.578 EAL: No shared files mode enabled, IPC is disabled 00:04:08.578 EAL: TSC frequency is ~2100000 KHz 00:04:08.578 EAL: Main lcore 0 is ready (tid=7f2cd5e3fa00;cpuset=[0]) 00:04:08.578 EAL: Trying to obtain current memory policy. 00:04:08.578 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.578 EAL: Restoring previous memory policy: 0 00:04:08.578 EAL: request: mp_malloc_sync 00:04:08.578 EAL: No shared files mode enabled, IPC is disabled 00:04:08.578 EAL: Heap on socket 0 was expanded by 2MB 00:04:08.578 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:08.837 EAL: Mem event callback 'spdk:(nil)' registered 00:04:08.837 00:04:08.837 00:04:08.837 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.837 http://cunit.sourceforge.net/ 00:04:08.837 00:04:08.837 00:04:08.837 Suite: components_suite 00:04:08.837 Test: vtophys_malloc_test ...passed 00:04:08.837 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 4MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 4MB 00:04:08.837 EAL: Trying to obtain current memory policy. 
00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 6MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 6MB 00:04:08.837 EAL: Trying to obtain current memory policy. 00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 10MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 10MB 00:04:08.837 EAL: Trying to obtain current memory policy. 00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 18MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 18MB 00:04:08.837 EAL: Trying to obtain current memory policy. 
00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 34MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 34MB 00:04:08.837 EAL: Trying to obtain current memory policy. 00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 66MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 66MB 00:04:08.837 EAL: Trying to obtain current memory policy. 00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 130MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 130MB 00:04:08.837 EAL: Trying to obtain current memory policy. 
00:04:08.837 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:08.837 EAL: Restoring previous memory policy: 4 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was expanded by 258MB 00:04:08.837 EAL: Calling mem event callback 'spdk:(nil)' 00:04:08.837 EAL: request: mp_malloc_sync 00:04:08.837 EAL: No shared files mode enabled, IPC is disabled 00:04:08.837 EAL: Heap on socket 0 was shrunk by 258MB 00:04:08.837 EAL: Trying to obtain current memory policy. 00:04:08.838 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.097 EAL: Restoring previous memory policy: 4 00:04:09.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.097 EAL: request: mp_malloc_sync 00:04:09.097 EAL: No shared files mode enabled, IPC is disabled 00:04:09.097 EAL: Heap on socket 0 was expanded by 514MB 00:04:09.097 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.356 EAL: request: mp_malloc_sync 00:04:09.356 EAL: No shared files mode enabled, IPC is disabled 00:04:09.356 EAL: Heap on socket 0 was shrunk by 514MB 00:04:09.356 EAL: Trying to obtain current memory policy. 
00:04:09.356 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.356 EAL: Restoring previous memory policy: 4 00:04:09.356 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.356 EAL: request: mp_malloc_sync 00:04:09.356 EAL: No shared files mode enabled, IPC is disabled 00:04:09.356 EAL: Heap on socket 0 was expanded by 1026MB 00:04:09.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.614 EAL: request: mp_malloc_sync 00:04:09.614 EAL: No shared files mode enabled, IPC is disabled 00:04:09.614 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:09.614 passed 00:04:09.614 00:04:09.614 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.614 suites 1 1 n/a 0 0 00:04:09.614 tests 2 2 2 0 0 00:04:09.614 asserts 497 497 497 0 n/a 00:04:09.614 00:04:09.614 Elapsed time = 0.973 seconds 00:04:09.614 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.614 EAL: request: mp_malloc_sync 00:04:09.614 EAL: No shared files mode enabled, IPC is disabled 00:04:09.614 EAL: Heap on socket 0 was shrunk by 2MB 00:04:09.614 EAL: No shared files mode enabled, IPC is disabled 00:04:09.614 EAL: No shared files mode enabled, IPC is disabled 00:04:09.614 EAL: No shared files mode enabled, IPC is disabled 00:04:09.873 00:04:09.873 real 0m1.115s 00:04:09.873 user 0m0.646s 00:04:09.873 sys 0m0.437s 00:04:09.874 10:21:50 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.874 10:21:50 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:09.874 ************************************ 00:04:09.874 END TEST env_vtophys 00:04:09.874 ************************************ 00:04:09.874 10:21:50 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.874 10:21:50 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.874 10:21:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.874 10:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.874 
************************************ 00:04:09.874 START TEST env_pci 00:04:09.874 ************************************ 00:04:09.874 10:21:50 env.env_pci -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:09.874 00:04:09.874 00:04:09.874 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.874 http://cunit.sourceforge.net/ 00:04:09.874 00:04:09.874 00:04:09.874 Suite: pci 00:04:09.874 Test: pci_hook ...[2024-11-20 10:21:50.429956] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 3030073 has claimed it 00:04:09.874 EAL: Cannot find device (10000:00:01.0) 00:04:09.874 EAL: Failed to attach device on primary process 00:04:09.874 passed 00:04:09.874 00:04:09.874 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.874 suites 1 1 n/a 0 0 00:04:09.874 tests 1 1 1 0 0 00:04:09.874 asserts 25 25 25 0 n/a 00:04:09.874 00:04:09.874 Elapsed time = 0.028 seconds 00:04:09.874 00:04:09.874 real 0m0.047s 00:04:09.874 user 0m0.015s 00:04:09.874 sys 0m0.032s 00:04:09.874 10:21:50 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.874 10:21:50 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:09.874 ************************************ 00:04:09.874 END TEST env_pci 00:04:09.874 ************************************ 00:04:09.874 10:21:50 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:09.874 10:21:50 env -- env/env.sh@15 -- # uname 00:04:09.874 10:21:50 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:09.874 10:21:50 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:09.874 10:21:50 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.874 10:21:50 env -- 
common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:09.874 10:21:50 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.874 10:21:50 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.874 ************************************ 00:04:09.874 START TEST env_dpdk_post_init 00:04:09.874 ************************************ 00:04:09.874 10:21:50 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:09.874 EAL: Detected CPU lcores: 96 00:04:09.874 EAL: Detected NUMA nodes: 2 00:04:09.874 EAL: Detected shared linkage of DPDK 00:04:09.874 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:09.874 EAL: Selected IOVA mode 'VA' 00:04:09.874 EAL: VFIO support initialized 00:04:09.874 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:10.133 EAL: Using IOMMU type 1 (Type 1) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.4 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:10.133 EAL: Ignore mapping IO port bar(1) 00:04:10.133 EAL: 
Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:11.069 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:11.069 EAL: Ignore mapping IO port bar(1) 00:04:11.069 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:14.355 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:14.355 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:14.923 Starting DPDK initialization... 00:04:14.923 Starting SPDK post initialization... 00:04:14.923 SPDK NVMe probe 00:04:14.923 Attaching to 0000:5e:00.0 00:04:14.923 Attached to 0000:5e:00.0 00:04:14.923 Cleaning up... 
00:04:14.923 00:04:14.923 real 0m4.876s 00:04:14.923 user 0m3.417s 00:04:14.923 sys 0m0.529s 00:04:14.923 10:21:55 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.923 10:21:55 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.923 ************************************ 00:04:14.923 END TEST env_dpdk_post_init 00:04:14.923 ************************************ 00:04:14.923 10:21:55 env -- env/env.sh@26 -- # uname 00:04:14.923 10:21:55 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.923 10:21:55 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.923 10:21:55 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.923 10:21:55 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.923 10:21:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.923 ************************************ 00:04:14.923 START TEST env_mem_callbacks 00:04:14.923 ************************************ 00:04:14.923 10:21:55 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.923 EAL: Detected CPU lcores: 96 00:04:14.923 EAL: Detected NUMA nodes: 2 00:04:14.923 EAL: Detected shared linkage of DPDK 00:04:14.923 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.923 EAL: Selected IOVA mode 'VA' 00:04:14.923 EAL: VFIO support initialized 00:04:14.923 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.923 00:04:14.923 00:04:14.923 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.923 http://cunit.sourceforge.net/ 00:04:14.923 00:04:14.923 00:04:14.923 Suite: memory 00:04:14.923 Test: test ... 
00:04:14.923 register 0x200000200000 2097152 00:04:14.923 malloc 3145728 00:04:14.923 register 0x200000400000 4194304 00:04:14.923 buf 0x200000500000 len 3145728 PASSED 00:04:14.923 malloc 64 00:04:14.923 buf 0x2000004fff40 len 64 PASSED 00:04:14.923 malloc 4194304 00:04:14.923 register 0x200000800000 6291456 00:04:14.923 buf 0x200000a00000 len 4194304 PASSED 00:04:14.923 free 0x200000500000 3145728 00:04:14.923 free 0x2000004fff40 64 00:04:14.923 unregister 0x200000400000 4194304 PASSED 00:04:14.923 free 0x200000a00000 4194304 00:04:14.923 unregister 0x200000800000 6291456 PASSED 00:04:14.923 malloc 8388608 00:04:14.923 register 0x200000400000 10485760 00:04:14.923 buf 0x200000600000 len 8388608 PASSED 00:04:14.923 free 0x200000600000 8388608 00:04:14.923 unregister 0x200000400000 10485760 PASSED 00:04:14.923 passed 00:04:14.923 00:04:14.923 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.923 suites 1 1 n/a 0 0 00:04:14.923 tests 1 1 1 0 0 00:04:14.923 asserts 15 15 15 0 n/a 00:04:14.923 00:04:14.923 Elapsed time = 0.008 seconds 00:04:14.923 00:04:14.923 real 0m0.057s 00:04:14.923 user 0m0.021s 00:04:14.923 sys 0m0.036s 00:04:14.923 10:21:55 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.923 10:21:55 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:14.923 ************************************ 00:04:14.923 END TEST env_mem_callbacks 00:04:14.923 ************************************ 00:04:14.923 00:04:14.923 real 0m6.769s 00:04:14.923 user 0m4.481s 00:04:14.923 sys 0m1.360s 00:04:14.923 10:21:55 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.923 10:21:55 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.923 ************************************ 00:04:14.923 END TEST env 00:04:14.923 ************************************ 00:04:14.923 10:21:55 -- spdk/autotest.sh@156 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:14.923 10:21:55 
-- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.923 10:21:55 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.923 10:21:55 -- common/autotest_common.sh@10 -- # set +x 00:04:15.182 ************************************ 00:04:15.182 START TEST rpc 00:04:15.182 ************************************ 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:15.182 * Looking for test storage... 00:04:15.182 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.182 10:21:55 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.182 10:21:55 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.182 10:21:55 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.182 10:21:55 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.182 10:21:55 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.182 10:21:55 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.182 10:21:55 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.182 10:21:55 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.182 10:21:55 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.182 10:21:55 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.182 10:21:55 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.182 10:21:55 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.182 10:21:55 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.182 10:21:55 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.182 10:21:55 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:15.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.182 --rc genhtml_branch_coverage=1 00:04:15.182 --rc genhtml_function_coverage=1 00:04:15.182 --rc genhtml_legend=1 00:04:15.182 --rc geninfo_all_blocks=1 00:04:15.182 --rc geninfo_unexecuted_blocks=1 00:04:15.182 00:04:15.182 ' 00:04:15.182 10:21:55 rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:15.182 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.182 --rc genhtml_branch_coverage=1 00:04:15.182 --rc genhtml_function_coverage=1 00:04:15.182 --rc genhtml_legend=1 00:04:15.183 --rc geninfo_all_blocks=1 00:04:15.183 --rc geninfo_unexecuted_blocks=1 00:04:15.183 00:04:15.183 ' 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:15.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:04:15.183 --rc genhtml_branch_coverage=1 00:04:15.183 --rc genhtml_function_coverage=1 00:04:15.183 --rc genhtml_legend=1 00:04:15.183 --rc geninfo_all_blocks=1 00:04:15.183 --rc geninfo_unexecuted_blocks=1 00:04:15.183 00:04:15.183 ' 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:15.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.183 --rc genhtml_branch_coverage=1 00:04:15.183 --rc genhtml_function_coverage=1 00:04:15.183 --rc genhtml_legend=1 00:04:15.183 --rc geninfo_all_blocks=1 00:04:15.183 --rc geninfo_unexecuted_blocks=1 00:04:15.183 00:04:15.183 ' 00:04:15.183 10:21:55 rpc -- rpc/rpc.sh@65 -- # spdk_pid=3031120 00:04:15.183 10:21:55 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.183 10:21:55 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:15.183 10:21:55 rpc -- rpc/rpc.sh@67 -- # waitforlisten 3031120 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@835 -- # '[' -z 3031120 ']' 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:15.183 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.183 10:21:55 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.183 [2024-11-20 10:21:55.884241] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:15.183 [2024-11-20 10:21:55.884287] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031120 ] 00:04:15.442 [2024-11-20 10:21:55.958235] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.442 [2024-11-20 10:21:55.997233] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.442 [2024-11-20 10:21:55.997269] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 3031120' to capture a snapshot of events at runtime. 00:04:15.442 [2024-11-20 10:21:55.997277] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.442 [2024-11-20 10:21:55.997283] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.442 [2024-11-20 10:21:55.997288] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid3031120 for offline analysis/debug. 
00:04:15.442 [2024-11-20 10:21:55.997852] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:15.701 10:21:56 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:15.701 10:21:56 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:15.701 10:21:56 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.701 10:21:56 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:15.701 10:21:56 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:15.701 10:21:56 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:15.701 10:21:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.701 10:21:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.701 10:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.701 ************************************ 00:04:15.701 START TEST rpc_integrity 00:04:15.701 ************************************ 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.701 10:21:56 rpc.rpc_integrity -- 
rpc/rpc.sh@12 -- # bdevs='[]' 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.701 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.701 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:15.701 { 00:04:15.701 "name": "Malloc0", 00:04:15.701 "aliases": [ 00:04:15.701 "c803770a-d8fe-4e54-b59a-1d34efb5db0c" 00:04:15.701 ], 00:04:15.701 "product_name": "Malloc disk", 00:04:15.701 "block_size": 512, 00:04:15.701 "num_blocks": 16384, 00:04:15.701 "uuid": "c803770a-d8fe-4e54-b59a-1d34efb5db0c", 00:04:15.701 "assigned_rate_limits": { 00:04:15.701 "rw_ios_per_sec": 0, 00:04:15.701 "rw_mbytes_per_sec": 0, 00:04:15.701 "r_mbytes_per_sec": 0, 00:04:15.701 "w_mbytes_per_sec": 0 00:04:15.701 }, 00:04:15.701 "claimed": false, 00:04:15.701 "zoned": false, 00:04:15.701 "supported_io_types": { 00:04:15.701 "read": true, 00:04:15.701 "write": true, 00:04:15.701 "unmap": true, 00:04:15.701 "flush": true, 00:04:15.701 "reset": true, 00:04:15.701 "nvme_admin": false, 00:04:15.701 "nvme_io": false, 00:04:15.701 "nvme_io_md": false, 00:04:15.701 "write_zeroes": true, 00:04:15.701 "zcopy": true, 00:04:15.702 "get_zone_info": false, 00:04:15.702 
"zone_management": false, 00:04:15.702 "zone_append": false, 00:04:15.702 "compare": false, 00:04:15.702 "compare_and_write": false, 00:04:15.702 "abort": true, 00:04:15.702 "seek_hole": false, 00:04:15.702 "seek_data": false, 00:04:15.702 "copy": true, 00:04:15.702 "nvme_iov_md": false 00:04:15.702 }, 00:04:15.702 "memory_domains": [ 00:04:15.702 { 00:04:15.702 "dma_device_id": "system", 00:04:15.702 "dma_device_type": 1 00:04:15.702 }, 00:04:15.702 { 00:04:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.702 "dma_device_type": 2 00:04:15.702 } 00:04:15.702 ], 00:04:15.702 "driver_specific": {} 00:04:15.702 } 00:04:15.702 ]' 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.702 [2024-11-20 10:21:56.381033] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:15.702 [2024-11-20 10:21:56.381060] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:15.702 [2024-11-20 10:21:56.381072] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x21b26e0 00:04:15.702 [2024-11-20 10:21:56.381078] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:15.702 [2024-11-20 10:21:56.382148] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:15.702 [2024-11-20 10:21:56.382168] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:15.702 Passthru0 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd 
bdev_get_bdevs 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.702 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:15.702 { 00:04:15.702 "name": "Malloc0", 00:04:15.702 "aliases": [ 00:04:15.702 "c803770a-d8fe-4e54-b59a-1d34efb5db0c" 00:04:15.702 ], 00:04:15.702 "product_name": "Malloc disk", 00:04:15.702 "block_size": 512, 00:04:15.702 "num_blocks": 16384, 00:04:15.702 "uuid": "c803770a-d8fe-4e54-b59a-1d34efb5db0c", 00:04:15.702 "assigned_rate_limits": { 00:04:15.702 "rw_ios_per_sec": 0, 00:04:15.702 "rw_mbytes_per_sec": 0, 00:04:15.702 "r_mbytes_per_sec": 0, 00:04:15.702 "w_mbytes_per_sec": 0 00:04:15.702 }, 00:04:15.702 "claimed": true, 00:04:15.702 "claim_type": "exclusive_write", 00:04:15.702 "zoned": false, 00:04:15.702 "supported_io_types": { 00:04:15.702 "read": true, 00:04:15.702 "write": true, 00:04:15.702 "unmap": true, 00:04:15.702 "flush": true, 00:04:15.702 "reset": true, 00:04:15.702 "nvme_admin": false, 00:04:15.702 "nvme_io": false, 00:04:15.702 "nvme_io_md": false, 00:04:15.702 "write_zeroes": true, 00:04:15.702 "zcopy": true, 00:04:15.702 "get_zone_info": false, 00:04:15.702 "zone_management": false, 00:04:15.702 "zone_append": false, 00:04:15.702 "compare": false, 00:04:15.702 "compare_and_write": false, 00:04:15.702 "abort": true, 00:04:15.702 "seek_hole": false, 00:04:15.702 "seek_data": false, 00:04:15.702 "copy": true, 00:04:15.702 "nvme_iov_md": false 00:04:15.702 }, 00:04:15.702 "memory_domains": [ 00:04:15.702 { 00:04:15.702 "dma_device_id": "system", 00:04:15.702 "dma_device_type": 1 00:04:15.702 }, 00:04:15.702 { 00:04:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.702 "dma_device_type": 2 00:04:15.702 } 00:04:15.702 ], 00:04:15.702 "driver_specific": {} 00:04:15.702 }, 00:04:15.702 { 
00:04:15.702 "name": "Passthru0", 00:04:15.702 "aliases": [ 00:04:15.702 "ef75d507-8f9b-5c7f-abba-215ab5b0d69f" 00:04:15.702 ], 00:04:15.702 "product_name": "passthru", 00:04:15.702 "block_size": 512, 00:04:15.702 "num_blocks": 16384, 00:04:15.702 "uuid": "ef75d507-8f9b-5c7f-abba-215ab5b0d69f", 00:04:15.702 "assigned_rate_limits": { 00:04:15.702 "rw_ios_per_sec": 0, 00:04:15.702 "rw_mbytes_per_sec": 0, 00:04:15.702 "r_mbytes_per_sec": 0, 00:04:15.702 "w_mbytes_per_sec": 0 00:04:15.702 }, 00:04:15.702 "claimed": false, 00:04:15.702 "zoned": false, 00:04:15.702 "supported_io_types": { 00:04:15.702 "read": true, 00:04:15.702 "write": true, 00:04:15.702 "unmap": true, 00:04:15.702 "flush": true, 00:04:15.702 "reset": true, 00:04:15.702 "nvme_admin": false, 00:04:15.702 "nvme_io": false, 00:04:15.702 "nvme_io_md": false, 00:04:15.702 "write_zeroes": true, 00:04:15.702 "zcopy": true, 00:04:15.702 "get_zone_info": false, 00:04:15.702 "zone_management": false, 00:04:15.702 "zone_append": false, 00:04:15.702 "compare": false, 00:04:15.702 "compare_and_write": false, 00:04:15.702 "abort": true, 00:04:15.702 "seek_hole": false, 00:04:15.702 "seek_data": false, 00:04:15.702 "copy": true, 00:04:15.702 "nvme_iov_md": false 00:04:15.702 }, 00:04:15.702 "memory_domains": [ 00:04:15.702 { 00:04:15.702 "dma_device_id": "system", 00:04:15.702 "dma_device_type": 1 00:04:15.702 }, 00:04:15.702 { 00:04:15.702 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.702 "dma_device_type": 2 00:04:15.702 } 00:04:15.702 ], 00:04:15.702 "driver_specific": { 00:04:15.702 "passthru": { 00:04:15.702 "name": "Passthru0", 00:04:15.702 "base_bdev_name": "Malloc0" 00:04:15.702 } 00:04:15.702 } 00:04:15.702 } 00:04:15.702 ]' 00:04:15.702 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:15.961 10:21:56 
rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:15.961 10:21:56 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:15.961 00:04:15.961 real 0m0.281s 00:04:15.961 user 0m0.181s 00:04:15.961 sys 0m0.038s 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 ************************************ 00:04:15.961 END TEST rpc_integrity 00:04:15.961 ************************************ 00:04:15.961 10:21:56 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:15.961 10:21:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.961 10:21:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.961 10:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 ************************************ 00:04:15.961 START TEST rpc_plugins 
00:04:15.961 ************************************ 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:15.961 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.961 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:15.961 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.961 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.961 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:15.961 { 00:04:15.961 "name": "Malloc1", 00:04:15.961 "aliases": [ 00:04:15.961 "d7be78ec-19a8-4ac8-be30-c87695682084" 00:04:15.961 ], 00:04:15.961 "product_name": "Malloc disk", 00:04:15.961 "block_size": 4096, 00:04:15.961 "num_blocks": 256, 00:04:15.961 "uuid": "d7be78ec-19a8-4ac8-be30-c87695682084", 00:04:15.961 "assigned_rate_limits": { 00:04:15.961 "rw_ios_per_sec": 0, 00:04:15.961 "rw_mbytes_per_sec": 0, 00:04:15.961 "r_mbytes_per_sec": 0, 00:04:15.961 "w_mbytes_per_sec": 0 00:04:15.961 }, 00:04:15.961 "claimed": false, 00:04:15.961 "zoned": false, 00:04:15.961 "supported_io_types": { 00:04:15.961 "read": true, 00:04:15.961 "write": true, 00:04:15.961 "unmap": true, 00:04:15.961 "flush": true, 00:04:15.961 "reset": true, 00:04:15.961 "nvme_admin": false, 00:04:15.961 "nvme_io": false, 00:04:15.961 "nvme_io_md": false, 00:04:15.961 "write_zeroes": true, 00:04:15.961 "zcopy": true, 00:04:15.961 "get_zone_info": false, 00:04:15.961 "zone_management": false, 00:04:15.961 
"zone_append": false, 00:04:15.962 "compare": false, 00:04:15.962 "compare_and_write": false, 00:04:15.962 "abort": true, 00:04:15.962 "seek_hole": false, 00:04:15.962 "seek_data": false, 00:04:15.962 "copy": true, 00:04:15.962 "nvme_iov_md": false 00:04:15.962 }, 00:04:15.962 "memory_domains": [ 00:04:15.962 { 00:04:15.962 "dma_device_id": "system", 00:04:15.962 "dma_device_type": 1 00:04:15.962 }, 00:04:15.962 { 00:04:15.962 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:15.962 "dma_device_type": 2 00:04:15.962 } 00:04:15.962 ], 00:04:15.962 "driver_specific": {} 00:04:15.962 } 00:04:15.962 ]' 00:04:15.962 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:15.962 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:15.962 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:15.962 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.962 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:15.962 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:15.962 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:15.962 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:15.962 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.220 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.221 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.221 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.221 10:21:56 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.221 00:04:16.221 real 0m0.143s 00:04:16.221 user 0m0.094s 00:04:16.221 sys 0m0.015s 00:04:16.221 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.221 10:21:56 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.221 ************************************ 
00:04:16.221 END TEST rpc_plugins 00:04:16.221 ************************************ 00:04:16.221 10:21:56 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.221 10:21:56 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.221 10:21:56 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.221 10:21:56 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.221 ************************************ 00:04:16.221 START TEST rpc_trace_cmd_test 00:04:16.221 ************************************ 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.221 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid3031120", 00:04:16.221 "tpoint_group_mask": "0x8", 00:04:16.221 "iscsi_conn": { 00:04:16.221 "mask": "0x2", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "scsi": { 00:04:16.221 "mask": "0x4", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "bdev": { 00:04:16.221 "mask": "0x8", 00:04:16.221 "tpoint_mask": "0xffffffffffffffff" 00:04:16.221 }, 00:04:16.221 "nvmf_rdma": { 00:04:16.221 "mask": "0x10", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "nvmf_tcp": { 00:04:16.221 "mask": "0x20", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "ftl": { 00:04:16.221 "mask": "0x40", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "blobfs": { 00:04:16.221 "mask": "0x80", 00:04:16.221 
"tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "dsa": { 00:04:16.221 "mask": "0x200", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "thread": { 00:04:16.221 "mask": "0x400", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "nvme_pcie": { 00:04:16.221 "mask": "0x800", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "iaa": { 00:04:16.221 "mask": "0x1000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "nvme_tcp": { 00:04:16.221 "mask": "0x2000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "bdev_nvme": { 00:04:16.221 "mask": "0x4000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "sock": { 00:04:16.221 "mask": "0x8000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "blob": { 00:04:16.221 "mask": "0x10000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "bdev_raid": { 00:04:16.221 "mask": "0x20000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 }, 00:04:16.221 "scheduler": { 00:04:16.221 "mask": "0x40000", 00:04:16.221 "tpoint_mask": "0x0" 00:04:16.221 } 00:04:16.221 }' 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.221 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.480 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.480 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:16.480 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:16.480 10:21:56 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:16.480 10:21:57 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 
0x0 ']' 00:04:16.480 00:04:16.480 real 0m0.229s 00:04:16.480 user 0m0.195s 00:04:16.480 sys 0m0.024s 00:04:16.480 10:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.480 10:21:57 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.480 ************************************ 00:04:16.480 END TEST rpc_trace_cmd_test 00:04:16.480 ************************************ 00:04:16.480 10:21:57 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:16.480 10:21:57 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:16.480 10:21:57 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:16.480 10:21:57 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.480 10:21:57 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.480 10:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.480 ************************************ 00:04:16.480 START TEST rpc_daemon_integrity 00:04:16.480 ************************************ 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:16.480 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@10 -- # set +x 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.481 { 00:04:16.481 "name": "Malloc2", 00:04:16.481 "aliases": [ 00:04:16.481 "f3b8630e-9af1-4a4d-a426-ea81519cf768" 00:04:16.481 ], 00:04:16.481 "product_name": "Malloc disk", 00:04:16.481 "block_size": 512, 00:04:16.481 "num_blocks": 16384, 00:04:16.481 "uuid": "f3b8630e-9af1-4a4d-a426-ea81519cf768", 00:04:16.481 "assigned_rate_limits": { 00:04:16.481 "rw_ios_per_sec": 0, 00:04:16.481 "rw_mbytes_per_sec": 0, 00:04:16.481 "r_mbytes_per_sec": 0, 00:04:16.481 "w_mbytes_per_sec": 0 00:04:16.481 }, 00:04:16.481 "claimed": false, 00:04:16.481 "zoned": false, 00:04:16.481 "supported_io_types": { 00:04:16.481 "read": true, 00:04:16.481 "write": true, 00:04:16.481 "unmap": true, 00:04:16.481 "flush": true, 00:04:16.481 "reset": true, 00:04:16.481 "nvme_admin": false, 00:04:16.481 "nvme_io": false, 00:04:16.481 "nvme_io_md": false, 00:04:16.481 "write_zeroes": true, 00:04:16.481 "zcopy": true, 00:04:16.481 "get_zone_info": false, 00:04:16.481 "zone_management": false, 00:04:16.481 "zone_append": false, 00:04:16.481 "compare": false, 00:04:16.481 "compare_and_write": false, 00:04:16.481 "abort": true, 00:04:16.481 "seek_hole": false, 00:04:16.481 "seek_data": false, 00:04:16.481 "copy": true, 00:04:16.481 "nvme_iov_md": false 00:04:16.481 }, 00:04:16.481 "memory_domains": [ 00:04:16.481 { 
00:04:16.481 "dma_device_id": "system", 00:04:16.481 "dma_device_type": 1 00:04:16.481 }, 00:04:16.481 { 00:04:16.481 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.481 "dma_device_type": 2 00:04:16.481 } 00:04:16.481 ], 00:04:16.481 "driver_specific": {} 00:04:16.481 } 00:04:16.481 ]' 00:04:16.481 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.739 [2024-11-20 10:21:57.239368] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:16.739 [2024-11-20 10:21:57.239395] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.739 [2024-11-20 10:21:57.239407] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x2242b70 00:04:16.739 [2024-11-20 10:21:57.239414] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.739 [2024-11-20 10:21:57.240375] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.739 [2024-11-20 10:21:57.240397] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.739 Passthru0 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.739 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.740 { 00:04:16.740 "name": "Malloc2", 00:04:16.740 "aliases": [ 00:04:16.740 "f3b8630e-9af1-4a4d-a426-ea81519cf768" 00:04:16.740 ], 00:04:16.740 "product_name": "Malloc disk", 00:04:16.740 "block_size": 512, 00:04:16.740 "num_blocks": 16384, 00:04:16.740 "uuid": "f3b8630e-9af1-4a4d-a426-ea81519cf768", 00:04:16.740 "assigned_rate_limits": { 00:04:16.740 "rw_ios_per_sec": 0, 00:04:16.740 "rw_mbytes_per_sec": 0, 00:04:16.740 "r_mbytes_per_sec": 0, 00:04:16.740 "w_mbytes_per_sec": 0 00:04:16.740 }, 00:04:16.740 "claimed": true, 00:04:16.740 "claim_type": "exclusive_write", 00:04:16.740 "zoned": false, 00:04:16.740 "supported_io_types": { 00:04:16.740 "read": true, 00:04:16.740 "write": true, 00:04:16.740 "unmap": true, 00:04:16.740 "flush": true, 00:04:16.740 "reset": true, 00:04:16.740 "nvme_admin": false, 00:04:16.740 "nvme_io": false, 00:04:16.740 "nvme_io_md": false, 00:04:16.740 "write_zeroes": true, 00:04:16.740 "zcopy": true, 00:04:16.740 "get_zone_info": false, 00:04:16.740 "zone_management": false, 00:04:16.740 "zone_append": false, 00:04:16.740 "compare": false, 00:04:16.740 "compare_and_write": false, 00:04:16.740 "abort": true, 00:04:16.740 "seek_hole": false, 00:04:16.740 "seek_data": false, 00:04:16.740 "copy": true, 00:04:16.740 "nvme_iov_md": false 00:04:16.740 }, 00:04:16.740 "memory_domains": [ 00:04:16.740 { 00:04:16.740 "dma_device_id": "system", 00:04:16.740 "dma_device_type": 1 00:04:16.740 }, 00:04:16.740 { 00:04:16.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.740 "dma_device_type": 2 00:04:16.740 } 00:04:16.740 ], 00:04:16.740 "driver_specific": {} 00:04:16.740 }, 00:04:16.740 { 00:04:16.740 "name": "Passthru0", 00:04:16.740 "aliases": [ 00:04:16.740 "1d8eb0d7-1076-5df2-83ed-f3dcafe82103" 00:04:16.740 ], 00:04:16.740 "product_name": "passthru", 00:04:16.740 "block_size": 512, 00:04:16.740 "num_blocks": 16384, 00:04:16.740 "uuid": 
"1d8eb0d7-1076-5df2-83ed-f3dcafe82103", 00:04:16.740 "assigned_rate_limits": { 00:04:16.740 "rw_ios_per_sec": 0, 00:04:16.740 "rw_mbytes_per_sec": 0, 00:04:16.740 "r_mbytes_per_sec": 0, 00:04:16.740 "w_mbytes_per_sec": 0 00:04:16.740 }, 00:04:16.740 "claimed": false, 00:04:16.740 "zoned": false, 00:04:16.740 "supported_io_types": { 00:04:16.740 "read": true, 00:04:16.740 "write": true, 00:04:16.740 "unmap": true, 00:04:16.740 "flush": true, 00:04:16.740 "reset": true, 00:04:16.740 "nvme_admin": false, 00:04:16.740 "nvme_io": false, 00:04:16.740 "nvme_io_md": false, 00:04:16.740 "write_zeroes": true, 00:04:16.740 "zcopy": true, 00:04:16.740 "get_zone_info": false, 00:04:16.740 "zone_management": false, 00:04:16.740 "zone_append": false, 00:04:16.740 "compare": false, 00:04:16.740 "compare_and_write": false, 00:04:16.740 "abort": true, 00:04:16.740 "seek_hole": false, 00:04:16.740 "seek_data": false, 00:04:16.740 "copy": true, 00:04:16.740 "nvme_iov_md": false 00:04:16.740 }, 00:04:16.740 "memory_domains": [ 00:04:16.740 { 00:04:16.740 "dma_device_id": "system", 00:04:16.740 "dma_device_type": 1 00:04:16.740 }, 00:04:16.740 { 00:04:16.740 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.740 "dma_device_type": 2 00:04:16.740 } 00:04:16.740 ], 00:04:16.740 "driver_specific": { 00:04:16.740 "passthru": { 00:04:16.740 "name": "Passthru0", 00:04:16.740 "base_bdev_name": "Malloc2" 00:04:16.740 } 00:04:16.740 } 00:04:16.740 } 00:04:16.740 ]' 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.740 00:04:16.740 real 0m0.281s 00:04:16.740 user 0m0.174s 00:04:16.740 sys 0m0.042s 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.740 10:21:57 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.740 ************************************ 00:04:16.740 END TEST rpc_daemon_integrity 00:04:16.740 ************************************ 00:04:16.740 10:21:57 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:16.740 10:21:57 rpc -- rpc/rpc.sh@84 -- # killprocess 3031120 00:04:16.740 10:21:57 rpc -- common/autotest_common.sh@954 -- # '[' -z 3031120 ']' 00:04:16.740 10:21:57 rpc -- common/autotest_common.sh@958 -- # kill -0 3031120 00:04:16.740 10:21:57 rpc -- common/autotest_common.sh@959 -- # uname 00:04:16.740 10:21:57 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:16.740 10:21:57 rpc -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031120 00:04:16.999 10:21:57 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:16.999 10:21:57 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:16.999 10:21:57 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031120' 00:04:16.999 killing process with pid 3031120 00:04:16.999 10:21:57 rpc -- common/autotest_common.sh@973 -- # kill 3031120 00:04:16.999 10:21:57 rpc -- common/autotest_common.sh@978 -- # wait 3031120 00:04:17.258 00:04:17.258 real 0m2.107s 00:04:17.258 user 0m2.710s 00:04:17.258 sys 0m0.691s 00:04:17.258 10:21:57 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.258 10:21:57 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.258 ************************************ 00:04:17.258 END TEST rpc 00:04:17.258 ************************************ 00:04:17.258 10:21:57 -- spdk/autotest.sh@157 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.258 10:21:57 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.258 10:21:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.258 10:21:57 -- common/autotest_common.sh@10 -- # set +x 00:04:17.258 ************************************ 00:04:17.258 START TEST skip_rpc 00:04:17.258 ************************************ 00:04:17.258 10:21:57 skip_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:17.258 * Looking for test storage... 
00:04:17.258 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:17.258 10:21:57 skip_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:17.258 10:21:57 skip_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:17.258 10:21:57 skip_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:17.518 10:21:57 skip_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:17.518 10:21:57 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:17.518 10:21:58 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.518 --rc genhtml_branch_coverage=1 00:04:17.518 --rc genhtml_function_coverage=1 00:04:17.518 --rc genhtml_legend=1 00:04:17.518 --rc geninfo_all_blocks=1 00:04:17.518 --rc geninfo_unexecuted_blocks=1 00:04:17.518 00:04:17.518 ' 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.518 --rc genhtml_branch_coverage=1 00:04:17.518 --rc genhtml_function_coverage=1 00:04:17.518 --rc genhtml_legend=1 00:04:17.518 --rc geninfo_all_blocks=1 00:04:17.518 --rc geninfo_unexecuted_blocks=1 00:04:17.518 00:04:17.518 ' 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:04:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.518 --rc genhtml_branch_coverage=1 00:04:17.518 --rc genhtml_function_coverage=1 00:04:17.518 --rc genhtml_legend=1 00:04:17.518 --rc geninfo_all_blocks=1 00:04:17.518 --rc geninfo_unexecuted_blocks=1 00:04:17.518 00:04:17.518 ' 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:17.518 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:17.518 --rc genhtml_branch_coverage=1 00:04:17.518 --rc genhtml_function_coverage=1 00:04:17.518 --rc genhtml_legend=1 00:04:17.518 --rc geninfo_all_blocks=1 00:04:17.518 --rc geninfo_unexecuted_blocks=1 00:04:17.518 00:04:17.518 ' 00:04:17.518 10:21:58 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:17.518 10:21:58 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:17.518 10:21:58 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.518 10:21:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.518 ************************************ 00:04:17.518 START TEST skip_rpc 00:04:17.518 ************************************ 00:04:17.518 10:21:58 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:17.518 10:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=3031630 00:04:17.518 10:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:17.518 10:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:17.518 10:21:58 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 
00:04:17.518 [2024-11-20 10:21:58.103752] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:17.518 [2024-11-20 10:21:58.103790] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3031630 ] 00:04:17.518 [2024-11-20 10:21:58.175214] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:17.518 [2024-11-20 10:21:58.214995] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:22.847 10:22:03 
skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 3031630 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 3031630 ']' 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 3031630 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3031630 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3031630' 00:04:22.847 killing process with pid 3031630 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 3031630 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 3031630 00:04:22.847 00:04:22.847 real 0m5.367s 00:04:22.847 user 0m5.125s 00:04:22.847 sys 0m0.282s 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:22.847 10:22:03 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.847 ************************************ 00:04:22.847 END TEST skip_rpc 00:04:22.847 ************************************ 00:04:22.847 10:22:03 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:22.847 10:22:03 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:22.847 10:22:03 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:22.847 10:22:03 
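The trace above exercises autotest_common.sh's exit-status inversion: the target was started with `--no-rpc-server`, so `rpc_cmd spdk_get_version` is expected to fail, and the `NOT` wrapper turns that expected failure into a test pass. A minimal sketch of the idiom, assuming a simplified `NOT` (the real SPDK helper also validates the argument and caps the error status, which is omitted here):

```shell
#!/usr/bin/env bash
# Sketch of the exit-status inversion idiom traced above. This is a
# reduced illustration, not SPDK's exact NOT/valid_exec_arg code.
NOT() {
    local es=0
    "$@" || es=$?
    # The wrapper succeeds only when the wrapped command failed.
    (( es != 0 ))
}

# An RPC call against a --no-rpc-server target is expected to fail,
# so NOT inverts that failure into a pass.
NOT false && echo "expected failure observed"
NOT true || echo "unexpected success detected"
```

The same pattern appears later in the log around `valid_exec_arg`, which first checks that the argument is actually runnable before inverting its status.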
skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:22.847 ************************************ 00:04:22.847 START TEST skip_rpc_with_json 00:04:22.847 ************************************ 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=3032553 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 3032553 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 3032553 ']' 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:22.847 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:22.847 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:22.847 [2024-11-20 10:22:03.540640] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:22.847 [2024-11-20 10:22:03.540681] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3032553 ] 00:04:23.105 [2024-11-20 10:22:03.616207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:23.105 [2024-11-20 10:22:03.657943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:23.363 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.364 [2024-11-20 10:22:03.870149] nvmf_rpc.c:2703:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:23.364 request: 00:04:23.364 { 00:04:23.364 "trtype": "tcp", 00:04:23.364 "method": "nvmf_get_transports", 00:04:23.364 "req_id": 1 00:04:23.364 } 00:04:23.364 Got JSON-RPC error response 00:04:23.364 response: 00:04:23.364 { 00:04:23.364 "code": -19, 00:04:23.364 "message": "No such device" 00:04:23.364 } 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.364 [2024-11-20 10:22:03.882259] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:23.364 10:22:03 
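The exchange above shows `rpc_cmd nvmf_get_transports --trtype tcp` failing with a JSON-RPC error (`code: -19`, "No such device") because no TCP transport exists yet; only after `nvmf_create_transport -t tcp` does the query succeed. A hedged sketch of handling that failure path, using a hypothetical `fake_rpc` stand-in rather than a real `rpc.py` call against `/var/tmp/spdk.sock`:

```shell
#!/usr/bin/env bash
# Stand-in for rpc_cmd: always reports the error logged above.
# fake_rpc is illustrative only; a real client would talk to the
# target's UNIX domain socket.
fake_rpc() {
    echo '{ "code": -19, "message": "No such device" }'
    return 1
}

# Callers check the exit status first, then inspect the error body.
if ! out=$(fake_rpc nvmf_get_transports --trtype tcp); then
    echo "RPC failed: $out"
fi
```

In the actual test this failure is intentional: it proves the transport query errors out before `nvmf_create_transport` has run.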
skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:23.364 10:22:03 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:23.364 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:23.364 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.364 { 00:04:23.364 "subsystems": [ 00:04:23.364 { 00:04:23.364 "subsystem": "fsdev", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "fsdev_set_opts", 00:04:23.364 "params": { 00:04:23.364 "fsdev_io_pool_size": 65535, 00:04:23.364 "fsdev_io_cache_size": 256 00:04:23.364 } 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "vfio_user_target", 00:04:23.364 "config": null 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "keyring", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "iobuf", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "iobuf_set_options", 00:04:23.364 "params": { 00:04:23.364 "small_pool_count": 8192, 00:04:23.364 "large_pool_count": 1024, 00:04:23.364 "small_bufsize": 8192, 00:04:23.364 "large_bufsize": 135168, 00:04:23.364 "enable_numa": false 00:04:23.364 } 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "sock", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "sock_set_default_impl", 00:04:23.364 "params": { 00:04:23.364 "impl_name": "posix" 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "sock_impl_set_options", 00:04:23.364 "params": { 00:04:23.364 "impl_name": "ssl", 00:04:23.364 "recv_buf_size": 4096, 00:04:23.364 "send_buf_size": 4096, 
00:04:23.364 "enable_recv_pipe": true, 00:04:23.364 "enable_quickack": false, 00:04:23.364 "enable_placement_id": 0, 00:04:23.364 "enable_zerocopy_send_server": true, 00:04:23.364 "enable_zerocopy_send_client": false, 00:04:23.364 "zerocopy_threshold": 0, 00:04:23.364 "tls_version": 0, 00:04:23.364 "enable_ktls": false 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "sock_impl_set_options", 00:04:23.364 "params": { 00:04:23.364 "impl_name": "posix", 00:04:23.364 "recv_buf_size": 2097152, 00:04:23.364 "send_buf_size": 2097152, 00:04:23.364 "enable_recv_pipe": true, 00:04:23.364 "enable_quickack": false, 00:04:23.364 "enable_placement_id": 0, 00:04:23.364 "enable_zerocopy_send_server": true, 00:04:23.364 "enable_zerocopy_send_client": false, 00:04:23.364 "zerocopy_threshold": 0, 00:04:23.364 "tls_version": 0, 00:04:23.364 "enable_ktls": false 00:04:23.364 } 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "vmd", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "accel", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "accel_set_options", 00:04:23.364 "params": { 00:04:23.364 "small_cache_size": 128, 00:04:23.364 "large_cache_size": 16, 00:04:23.364 "task_count": 2048, 00:04:23.364 "sequence_count": 2048, 00:04:23.364 "buf_count": 2048 00:04:23.364 } 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "bdev", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "bdev_set_options", 00:04:23.364 "params": { 00:04:23.364 "bdev_io_pool_size": 65535, 00:04:23.364 "bdev_io_cache_size": 256, 00:04:23.364 "bdev_auto_examine": true, 00:04:23.364 "iobuf_small_cache_size": 128, 00:04:23.364 "iobuf_large_cache_size": 16 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "bdev_raid_set_options", 00:04:23.364 "params": { 00:04:23.364 "process_window_size_kb": 1024, 00:04:23.364 "process_max_bandwidth_mb_sec": 0 
00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "bdev_iscsi_set_options", 00:04:23.364 "params": { 00:04:23.364 "timeout_sec": 30 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "bdev_nvme_set_options", 00:04:23.364 "params": { 00:04:23.364 "action_on_timeout": "none", 00:04:23.364 "timeout_us": 0, 00:04:23.364 "timeout_admin_us": 0, 00:04:23.364 "keep_alive_timeout_ms": 10000, 00:04:23.364 "arbitration_burst": 0, 00:04:23.364 "low_priority_weight": 0, 00:04:23.364 "medium_priority_weight": 0, 00:04:23.364 "high_priority_weight": 0, 00:04:23.364 "nvme_adminq_poll_period_us": 10000, 00:04:23.364 "nvme_ioq_poll_period_us": 0, 00:04:23.364 "io_queue_requests": 0, 00:04:23.364 "delay_cmd_submit": true, 00:04:23.364 "transport_retry_count": 4, 00:04:23.364 "bdev_retry_count": 3, 00:04:23.364 "transport_ack_timeout": 0, 00:04:23.364 "ctrlr_loss_timeout_sec": 0, 00:04:23.364 "reconnect_delay_sec": 0, 00:04:23.364 "fast_io_fail_timeout_sec": 0, 00:04:23.364 "disable_auto_failback": false, 00:04:23.364 "generate_uuids": false, 00:04:23.364 "transport_tos": 0, 00:04:23.364 "nvme_error_stat": false, 00:04:23.364 "rdma_srq_size": 0, 00:04:23.364 "io_path_stat": false, 00:04:23.364 "allow_accel_sequence": false, 00:04:23.364 "rdma_max_cq_size": 0, 00:04:23.364 "rdma_cm_event_timeout_ms": 0, 00:04:23.364 "dhchap_digests": [ 00:04:23.364 "sha256", 00:04:23.364 "sha384", 00:04:23.364 "sha512" 00:04:23.364 ], 00:04:23.364 "dhchap_dhgroups": [ 00:04:23.364 "null", 00:04:23.364 "ffdhe2048", 00:04:23.364 "ffdhe3072", 00:04:23.364 "ffdhe4096", 00:04:23.364 "ffdhe6144", 00:04:23.364 "ffdhe8192" 00:04:23.364 ] 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "bdev_nvme_set_hotplug", 00:04:23.364 "params": { 00:04:23.364 "period_us": 100000, 00:04:23.364 "enable": false 00:04:23.364 } 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "method": "bdev_wait_for_examine" 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 
00:04:23.364 "subsystem": "scsi", 00:04:23.364 "config": null 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "scheduler", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "framework_set_scheduler", 00:04:23.364 "params": { 00:04:23.364 "name": "static" 00:04:23.364 } 00:04:23.364 } 00:04:23.364 ] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "vhost_scsi", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "vhost_blk", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "ublk", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "nbd", 00:04:23.364 "config": [] 00:04:23.364 }, 00:04:23.364 { 00:04:23.364 "subsystem": "nvmf", 00:04:23.364 "config": [ 00:04:23.364 { 00:04:23.364 "method": "nvmf_set_config", 00:04:23.364 "params": { 00:04:23.364 "discovery_filter": "match_any", 00:04:23.364 "admin_cmd_passthru": { 00:04:23.364 "identify_ctrlr": false 00:04:23.364 }, 00:04:23.364 "dhchap_digests": [ 00:04:23.364 "sha256", 00:04:23.364 "sha384", 00:04:23.364 "sha512" 00:04:23.364 ], 00:04:23.364 "dhchap_dhgroups": [ 00:04:23.364 "null", 00:04:23.364 "ffdhe2048", 00:04:23.364 "ffdhe3072", 00:04:23.364 "ffdhe4096", 00:04:23.365 "ffdhe6144", 00:04:23.365 "ffdhe8192" 00:04:23.365 ] 00:04:23.365 } 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "method": "nvmf_set_max_subsystems", 00:04:23.365 "params": { 00:04:23.365 "max_subsystems": 1024 00:04:23.365 } 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "method": "nvmf_set_crdt", 00:04:23.365 "params": { 00:04:23.365 "crdt1": 0, 00:04:23.365 "crdt2": 0, 00:04:23.365 "crdt3": 0 00:04:23.365 } 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "method": "nvmf_create_transport", 00:04:23.365 "params": { 00:04:23.365 "trtype": "TCP", 00:04:23.365 "max_queue_depth": 128, 00:04:23.365 "max_io_qpairs_per_ctrlr": 127, 00:04:23.365 "in_capsule_data_size": 4096, 00:04:23.365 "max_io_size": 131072, 00:04:23.365 
"io_unit_size": 131072, 00:04:23.365 "max_aq_depth": 128, 00:04:23.365 "num_shared_buffers": 511, 00:04:23.365 "buf_cache_size": 4294967295, 00:04:23.365 "dif_insert_or_strip": false, 00:04:23.365 "zcopy": false, 00:04:23.365 "c2h_success": true, 00:04:23.365 "sock_priority": 0, 00:04:23.365 "abort_timeout_sec": 1, 00:04:23.365 "ack_timeout": 0, 00:04:23.365 "data_wr_pool_size": 0 00:04:23.365 } 00:04:23.365 } 00:04:23.365 ] 00:04:23.365 }, 00:04:23.365 { 00:04:23.365 "subsystem": "iscsi", 00:04:23.365 "config": [ 00:04:23.365 { 00:04:23.365 "method": "iscsi_set_options", 00:04:23.365 "params": { 00:04:23.365 "node_base": "iqn.2016-06.io.spdk", 00:04:23.365 "max_sessions": 128, 00:04:23.365 "max_connections_per_session": 2, 00:04:23.365 "max_queue_depth": 64, 00:04:23.365 "default_time2wait": 2, 00:04:23.365 "default_time2retain": 20, 00:04:23.365 "first_burst_length": 8192, 00:04:23.365 "immediate_data": true, 00:04:23.365 "allow_duplicated_isid": false, 00:04:23.365 "error_recovery_level": 0, 00:04:23.365 "nop_timeout": 60, 00:04:23.365 "nop_in_interval": 30, 00:04:23.365 "disable_chap": false, 00:04:23.365 "require_chap": false, 00:04:23.365 "mutual_chap": false, 00:04:23.365 "chap_group": 0, 00:04:23.365 "max_large_datain_per_connection": 64, 00:04:23.365 "max_r2t_per_connection": 4, 00:04:23.365 "pdu_pool_size": 36864, 00:04:23.365 "immediate_data_pool_size": 16384, 00:04:23.365 "data_out_pool_size": 2048 00:04:23.365 } 00:04:23.365 } 00:04:23.365 ] 00:04:23.365 } 00:04:23.365 ] 00:04:23.365 } 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 3032553 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3032553 ']' 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3032553 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- 
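The JSON dumped above is the `save_config` output written to test/rpc/config.json: a `subsystems` array where each entry names a subsystem and lists the `{method, params}` RPC calls replayed when the target is relaunched with `--json`. A small sketch of pulling the replayed method names out of such a dump; the trimmed JSON here is illustrative, a real dump is far longer:

```shell
#!/usr/bin/env bash
# Trimmed stand-in for a save_config dump like the one above.
config='{
  "subsystems": [
    { "subsystem": "nvmf",
      "config": [
        { "method": "nvmf_set_max_subsystems", "params": { "max_subsystems": 1024 } },
        { "method": "nvmf_create_transport", "params": { "trtype": "TCP" } }
      ] },
    { "subsystem": "scsi", "config": null }
  ]
}'

# Each "config" entry is one RPC replayed at load time; a null config
# means nothing is replayed for that subsystem. grep -o keeps just the
# method names, one per line.
printf '%s\n' "$config" | grep -o '"method": "[^"]*"'
```

This is how the test later verifies behavior: it restarts the target with `--json config.json` and greps the log for "TCP Transport Init" to confirm the saved `nvmf_create_transport` call was replayed.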
common/autotest_common.sh@959 -- # uname 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:23.365 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032553 00:04:23.624 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:23.624 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:23.624 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032553' 00:04:23.624 killing process with pid 3032553 00:04:23.624 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3032553 00:04:23.624 10:22:04 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3032553 00:04:23.883 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:23.883 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=3032733 00:04:23.883 10:22:04 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 3032733 ']' 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3032733' 00:04:29.214 killing process with pid 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 3032733 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:29.214 00:04:29.214 real 0m6.274s 00:04:29.214 user 0m5.977s 00:04:29.214 sys 0m0.591s 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:29.214 ************************************ 00:04:29.214 END TEST skip_rpc_with_json 00:04:29.214 ************************************ 00:04:29.214 10:22:09 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.214 ************************************ 00:04:29.214 START TEST skip_rpc_with_delay 00:04:29.214 ************************************ 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:29.214 [2024-11-20 10:22:09.887889] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.214 00:04:29.214 real 0m0.070s 00:04:29.214 user 0m0.041s 00:04:29.214 sys 0m0.028s 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:29.214 10:22:09 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:29.214 ************************************ 00:04:29.214 END TEST skip_rpc_with_delay 00:04:29.214 ************************************ 00:04:29.214 10:22:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:29.214 10:22:09 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:29.214 10:22:09 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:29.214 10:22:09 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:29.473 ************************************ 00:04:29.473 START TEST exit_on_failed_rpc_init 00:04:29.473 ************************************ 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=3033704 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 3033704 
00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 3033704 ']' 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:29.473 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:29.473 10:22:09 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:29.473 [2024-11-20 10:22:10.022700] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:29.473 [2024-11-20 10:22:10.022743] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033704 ] 00:04:29.473 [2024-11-20 10:22:10.097597] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.473 [2024-11-20 10:22:10.140438] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.732 
10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.732 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:29.733 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:29.733 [2024-11-20 10:22:10.418048] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:29.733 [2024-11-20 10:22:10.418091] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3033790 ] 00:04:29.992 [2024-11-20 10:22:10.473759] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:29.992 [2024-11-20 10:22:10.514340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:29.992 [2024-11-20 10:22:10.514396] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:29.992 [2024-11-20 10:22:10.514405] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:29.992 [2024-11-20 10:22:10.514411] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 3033704 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 3033704 ']' 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 3033704 00:04:29.992 10:22:10 
skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3033704 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3033704' 00:04:29.992 killing process with pid 3033704 00:04:29.992 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 3033704 00:04:29.993 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 3033704 00:04:30.252 00:04:30.252 real 0m0.936s 00:04:30.252 user 0m1.005s 00:04:30.252 sys 0m0.360s 00:04:30.252 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.252 10:22:10 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:30.252 ************************************ 00:04:30.252 END TEST exit_on_failed_rpc_init 00:04:30.252 ************************************ 00:04:30.252 10:22:10 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:30.252 00:04:30.252 real 0m13.113s 00:04:30.252 user 0m12.370s 00:04:30.252 sys 0m1.536s 00:04:30.252 10:22:10 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.252 10:22:10 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:30.252 ************************************ 00:04:30.252 END TEST skip_rpc 00:04:30.252 ************************************ 00:04:30.511 10:22:10 -- spdk/autotest.sh@158 -- # run_test rpc_client 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.511 10:22:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.511 10:22:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.511 10:22:10 -- common/autotest_common.sh@10 -- # set +x 00:04:30.511 ************************************ 00:04:30.511 START TEST rpc_client 00:04:30.511 ************************************ 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:30.511 * Looking for test storage... 00:04:30.511 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@344 -- # case 
"$op" in 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.511 10:22:11 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.511 --rc genhtml_branch_coverage=1 00:04:30.511 --rc genhtml_function_coverage=1 00:04:30.511 --rc genhtml_legend=1 00:04:30.511 --rc geninfo_all_blocks=1 00:04:30.511 --rc geninfo_unexecuted_blocks=1 00:04:30.511 00:04:30.511 ' 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.511 --rc genhtml_branch_coverage=1 
00:04:30.511 --rc genhtml_function_coverage=1 00:04:30.511 --rc genhtml_legend=1 00:04:30.511 --rc geninfo_all_blocks=1 00:04:30.511 --rc geninfo_unexecuted_blocks=1 00:04:30.511 00:04:30.511 ' 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.511 --rc genhtml_branch_coverage=1 00:04:30.511 --rc genhtml_function_coverage=1 00:04:30.511 --rc genhtml_legend=1 00:04:30.511 --rc geninfo_all_blocks=1 00:04:30.511 --rc geninfo_unexecuted_blocks=1 00:04:30.511 00:04:30.511 ' 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.511 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.511 --rc genhtml_branch_coverage=1 00:04:30.511 --rc genhtml_function_coverage=1 00:04:30.511 --rc genhtml_legend=1 00:04:30.511 --rc geninfo_all_blocks=1 00:04:30.511 --rc geninfo_unexecuted_blocks=1 00:04:30.511 00:04:30.511 ' 00:04:30.511 10:22:11 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:30.511 OK 00:04:30.511 10:22:11 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:30.511 00:04:30.511 real 0m0.201s 00:04:30.511 user 0m0.126s 00:04:30.511 sys 0m0.088s 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:30.511 10:22:11 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:30.511 ************************************ 00:04:30.511 END TEST rpc_client 00:04:30.511 ************************************ 00:04:30.771 10:22:11 -- spdk/autotest.sh@159 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.771 10:22:11 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:30.771 10:22:11 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:30.771 10:22:11 -- common/autotest_common.sh@10 
-- # set +x 00:04:30.771 ************************************ 00:04:30.771 START TEST json_config 00:04:30.771 ************************************ 00:04:30.771 10:22:11 json_config -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:30.771 10:22:11 json_config -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:30.771 10:22:11 json_config -- common/autotest_common.sh@1693 -- # lcov --version 00:04:30.771 10:22:11 json_config -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:30.771 10:22:11 json_config -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:30.771 10:22:11 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:30.771 10:22:11 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:30.771 10:22:11 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:30.771 10:22:11 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:30.771 10:22:11 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:30.771 10:22:11 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:30.771 10:22:11 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:30.771 10:22:11 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:30.771 10:22:11 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:30.771 10:22:11 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:30.771 10:22:11 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:30.771 10:22:11 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:30.771 10:22:11 json_config -- scripts/common.sh@345 -- # : 1 00:04:30.771 10:22:11 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:30.771 10:22:11 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:30.771 10:22:11 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:30.772 10:22:11 json_config -- scripts/common.sh@353 -- # local d=1 00:04:30.772 10:22:11 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:30.772 10:22:11 json_config -- scripts/common.sh@355 -- # echo 1 00:04:30.772 10:22:11 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:30.772 10:22:11 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:30.772 10:22:11 json_config -- scripts/common.sh@353 -- # local d=2 00:04:30.772 10:22:11 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:30.772 10:22:11 json_config -- scripts/common.sh@355 -- # echo 2 00:04:30.772 10:22:11 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:30.772 10:22:11 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:30.772 10:22:11 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:30.772 10:22:11 json_config -- scripts/common.sh@368 -- # return 0 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.772 --rc genhtml_branch_coverage=1 00:04:30.772 --rc genhtml_function_coverage=1 00:04:30.772 --rc genhtml_legend=1 00:04:30.772 --rc geninfo_all_blocks=1 00:04:30.772 --rc geninfo_unexecuted_blocks=1 00:04:30.772 00:04:30.772 ' 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.772 --rc genhtml_branch_coverage=1 00:04:30.772 --rc genhtml_function_coverage=1 00:04:30.772 --rc genhtml_legend=1 00:04:30.772 --rc geninfo_all_blocks=1 00:04:30.772 --rc geninfo_unexecuted_blocks=1 00:04:30.772 00:04:30.772 ' 00:04:30.772 10:22:11 json_config -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.772 --rc genhtml_branch_coverage=1 00:04:30.772 --rc genhtml_function_coverage=1 00:04:30.772 --rc genhtml_legend=1 00:04:30.772 --rc geninfo_all_blocks=1 00:04:30.772 --rc geninfo_unexecuted_blocks=1 00:04:30.772 00:04:30.772 ' 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:30.772 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:30.772 --rc genhtml_branch_coverage=1 00:04:30.772 --rc genhtml_function_coverage=1 00:04:30.772 --rc genhtml_legend=1 00:04:30.772 --rc geninfo_all_blocks=1 00:04:30.772 --rc geninfo_unexecuted_blocks=1 00:04:30.772 00:04:30.772 ' 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:30.772 10:22:11 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:30.772 10:22:11 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:30.772 10:22:11 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:30.772 10:22:11 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:30.772 10:22:11 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.772 10:22:11 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.772 10:22:11 json_config -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.772 10:22:11 json_config -- paths/export.sh@5 -- # export PATH 00:04:30.772 10:22:11 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:04:30.772 10:22:11 json_config -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:30.772 10:22:11 json_config -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:30.772 10:22:11 json_config -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@50 -- # : 0 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:30.772 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: 
line 31: [: : integer expression expected 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:30.772 10:22:11 json_config -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:30.772 10:22:11 json_config -- 
json_config/json_config.sh@40 -- # last_event_id=0 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@362 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@363 -- # echo 'INFO: JSON configuration test init' 00:04:30.772 INFO: JSON configuration test init 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@364 -- # json_config_test_init 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@269 -- # timing_enter json_config_test_init 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@270 -- # timing_enter json_config_setup_target 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:30.772 10:22:11 json_config -- json_config/json_config.sh@272 -- # json_config_test_start_app target --wait-for-rpc 00:04:30.772 10:22:11 json_config -- json_config/common.sh@9 -- # local app=target 00:04:30.772 10:22:11 json_config -- json_config/common.sh@10 -- # shift 00:04:30.772 10:22:11 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:30.772 10:22:11 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:30.772 10:22:11 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:30.772 10:22:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.772 10:22:11 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:30.772 10:22:11 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3034070 00:04:30.772 10:22:11 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:30.772 Waiting for target to run... 
00:04:30.772 10:22:11 json_config -- json_config/common.sh@25 -- # waitforlisten 3034070 /var/tmp/spdk_tgt.sock 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@835 -- # '[' -z 3034070 ']' 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:30.772 10:22:11 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:30.772 10:22:11 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:30.773 10:22:11 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:30.773 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:30.773 10:22:11 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:30.773 10:22:11 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.032 [2024-11-20 10:22:11.532078] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:31.032 [2024-11-20 10:22:11.532122] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3034070 ] 00:04:31.291 [2024-11-20 10:22:11.823339] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:31.291 [2024-11-20 10:22:11.862372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:31.858 10:22:12 json_config -- json_config/common.sh@26 -- # echo '' 00:04:31.858 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@276 -- # create_accel_config 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@100 -- # timing_enter create_accel_config 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@102 -- # [[ 0 -eq 1 ]] 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@108 -- # timing_exit create_accel_config 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:31.858 10:22:12 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@280 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:31.858 10:22:12 json_config -- json_config/json_config.sh@281 -- # tgt_rpc load_config 00:04:31.858 10:22:12 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock load_config 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@283 -- # 
tgt_check_notification_types 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@47 -- # [[ y == y ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@48 -- # enabled_types+=("fsdev_register" "fsdev_unregister") 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@51 -- # tgt_rpc notify_get_types 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@51 -- # jq -r '.[]' 00:04:35.147 10:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@51 -- # get_types=('fsdev_register' 'fsdev_unregister' 'bdev_register' 'bdev_unregister') 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@51 -- # local get_types 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@53 -- # local type_diff 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@54 -- # echo bdev_register bdev_unregister fsdev_register fsdev_unregister fsdev_register fsdev_unregister bdev_register bdev_unregister 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@54 -- # tr ' ' '\n' 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@54 -- # sort 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@54 -- # uniq -u 00:04:35.147 10:22:15 json_config -- 
json_config/json_config.sh@54 -- # type_diff= 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@56 -- # [[ -n '' ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@61 -- # timing_exit tgt_check_notification_types 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@62 -- # return 0 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@285 -- # [[ 0 -eq 1 ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@289 -- # [[ 0 -eq 1 ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@293 -- # [[ 0 -eq 1 ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@297 -- # [[ 1 -eq 1 ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@298 -- # create_nvmf_subsystem_config 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@237 -- # timing_enter create_nvmf_subsystem_config 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:35.147 10:22:15 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@239 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@240 -- # [[ tcp == \r\d\m\a ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@244 -- # [[ -z 127.0.0.1 ]] 00:04:35.147 10:22:15 json_config -- json_config/json_config.sh@249 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.147 10:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:04:35.406 MallocForNvmf0 00:04:35.406 10:22:15 json_config -- json_config/json_config.sh@250 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 
00:04:35.406 10:22:15 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:04:35.406 MallocForNvmf1 00:04:35.406 10:22:16 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.406 10:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:04:35.666 [2024-11-20 10:22:16.286308] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:35.666 10:22:16 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.666 10:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:04:35.924 10:22:16 json_config -- json_config/json_config.sh@254 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:35.924 10:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:04:36.183 10:22:16 json_config -- json_config/json_config.sh@255 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.183 10:22:16 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:04:36.183 10:22:16 json_config -- json_config/json_config.sh@256 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.183 10:22:16 json_config -- 
json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:04:36.442 [2024-11-20 10:22:16.992549] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:36.442 10:22:17 json_config -- json_config/json_config.sh@258 -- # timing_exit create_nvmf_subsystem_config 00:04:36.442 10:22:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.442 10:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.442 10:22:17 json_config -- json_config/json_config.sh@300 -- # timing_exit json_config_setup_target 00:04:36.442 10:22:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.442 10:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.442 10:22:17 json_config -- json_config/json_config.sh@302 -- # [[ 0 -eq 1 ]] 00:04:36.442 10:22:17 json_config -- json_config/json_config.sh@307 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.442 10:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:04:36.701 MallocBdevForConfigChangeCheck 00:04:36.701 10:22:17 json_config -- json_config/json_config.sh@309 -- # timing_exit json_config_test_init 00:04:36.701 10:22:17 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:36.701 10:22:17 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.701 10:22:17 json_config -- json_config/json_config.sh@366 -- # tgt_rpc save_config 00:04:36.701 10:22:17 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:36.959 10:22:17 json_config -- json_config/json_config.sh@368 -- # 
echo 'INFO: shutting down applications...' 00:04:36.959 INFO: shutting down applications... 00:04:36.959 10:22:17 json_config -- json_config/json_config.sh@369 -- # [[ 0 -eq 1 ]] 00:04:36.959 10:22:17 json_config -- json_config/json_config.sh@375 -- # json_config_clear target 00:04:36.959 10:22:17 json_config -- json_config/json_config.sh@339 -- # [[ -n 22 ]] 00:04:36.959 10:22:17 json_config -- json_config/json_config.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:04:39.491 Calling clear_iscsi_subsystem 00:04:39.491 Calling clear_nvmf_subsystem 00:04:39.491 Calling clear_nbd_subsystem 00:04:39.491 Calling clear_ublk_subsystem 00:04:39.491 Calling clear_vhost_blk_subsystem 00:04:39.491 Calling clear_vhost_scsi_subsystem 00:04:39.491 Calling clear_bdev_subsystem 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@344 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@350 -- # count=100 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@351 -- # '[' 100 -gt 0 ']' 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:04:39.491 10:22:19 json_config -- json_config/json_config.sh@352 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:04:39.491 10:22:20 json_config -- json_config/json_config.sh@352 -- # break 00:04:39.491 10:22:20 json_config -- json_config/json_config.sh@357 -- # '[' 100 -eq 0 ']' 00:04:39.491 10:22:20 json_config -- json_config/json_config.sh@376 -- # 
json_config_test_shutdown_app target 00:04:39.491 10:22:20 json_config -- json_config/common.sh@31 -- # local app=target 00:04:39.491 10:22:20 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:39.491 10:22:20 json_config -- json_config/common.sh@35 -- # [[ -n 3034070 ]] 00:04:39.492 10:22:20 json_config -- json_config/common.sh@38 -- # kill -SIGINT 3034070 00:04:39.492 10:22:20 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:39.492 10:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.492 10:22:20 json_config -- json_config/common.sh@41 -- # kill -0 3034070 00:04:39.492 10:22:20 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:04:40.060 10:22:20 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:04:40.060 10:22:20 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:40.060 10:22:20 json_config -- json_config/common.sh@41 -- # kill -0 3034070 00:04:40.060 10:22:20 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:40.060 10:22:20 json_config -- json_config/common.sh@43 -- # break 00:04:40.060 10:22:20 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:40.060 10:22:20 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:40.060 SPDK target shutdown done 00:04:40.060 10:22:20 json_config -- json_config/json_config.sh@378 -- # echo 'INFO: relaunching applications...' 00:04:40.060 INFO: relaunching applications... 
00:04:40.060 10:22:20 json_config -- json_config/json_config.sh@379 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.060 10:22:20 json_config -- json_config/common.sh@9 -- # local app=target 00:04:40.060 10:22:20 json_config -- json_config/common.sh@10 -- # shift 00:04:40.060 10:22:20 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:40.060 10:22:20 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:40.060 10:22:20 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:04:40.060 10:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.060 10:22:20 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:40.060 10:22:20 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=3035808 00:04:40.060 10:22:20 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:40.060 Waiting for target to run... 00:04:40.060 10:22:20 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:40.060 10:22:20 json_config -- json_config/common.sh@25 -- # waitforlisten 3035808 /var/tmp/spdk_tgt.sock 00:04:40.060 10:22:20 json_config -- common/autotest_common.sh@835 -- # '[' -z 3035808 ']' 00:04:40.060 10:22:20 json_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:40.060 10:22:20 json_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.060 10:22:20 json_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:40.060 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 
00:04:40.060 10:22:20 json_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.061 10:22:20 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:40.061 [2024-11-20 10:22:20.680382] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:40.061 [2024-11-20 10:22:20.680444] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3035808 ] 00:04:40.663 [2024-11-20 10:22:21.145474] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.663 [2024-11-20 10:22:21.203689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.951 [2024-11-20 10:22:24.236224] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:43.951 [2024-11-20 10:22:24.268602] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:04:44.211 10:22:24 json_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:44.211 10:22:24 json_config -- common/autotest_common.sh@868 -- # return 0 00:04:44.211 10:22:24 json_config -- json_config/common.sh@26 -- # echo '' 00:04:44.211 00:04:44.211 10:22:24 json_config -- json_config/json_config.sh@380 -- # [[ 0 -eq 1 ]] 00:04:44.211 10:22:24 json_config -- json_config/json_config.sh@384 -- # echo 'INFO: Checking if target configuration is the same...' 00:04:44.211 INFO: Checking if target configuration is the same... 
00:04:44.211 10:22:24 json_config -- json_config/json_config.sh@385 -- # tgt_rpc save_config 00:04:44.211 10:22:24 json_config -- json_config/json_config.sh@385 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.211 10:22:24 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:44.211 + '[' 2 -ne 2 ']' 00:04:44.211 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:44.211 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:04:44.211 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:44.211 +++ basename /dev/fd/62 00:04:44.211 ++ mktemp /tmp/62.XXX 00:04:44.211 + tmp_file_1=/tmp/62.LrH 00:04:44.211 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:44.211 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:44.211 + tmp_file_2=/tmp/spdk_tgt_config.json.S9c 00:04:44.211 + ret=0 00:04:44.211 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.778 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:44.778 + diff -u /tmp/62.LrH /tmp/spdk_tgt_config.json.S9c 00:04:44.778 + echo 'INFO: JSON config files are the same' 00:04:44.778 INFO: JSON config files are the same 00:04:44.778 + rm /tmp/62.LrH /tmp/spdk_tgt_config.json.S9c 00:04:44.778 + exit 0 00:04:44.778 10:22:25 json_config -- json_config/json_config.sh@386 -- # [[ 0 -eq 1 ]] 00:04:44.778 10:22:25 json_config -- json_config/json_config.sh@391 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:04:44.778 INFO: changing configuration and checking if this can be detected... 
00:04:44.778 10:22:25 json_config -- json_config/json_config.sh@393 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:44.778 10:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:04:45.036 10:22:25 json_config -- json_config/json_config.sh@394 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.036 10:22:25 json_config -- json_config/json_config.sh@394 -- # tgt_rpc save_config 00:04:45.037 10:22:25 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:04:45.037 + '[' 2 -ne 2 ']' 00:04:45.037 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:04:45.037 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 
00:04:45.037 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:04:45.037 +++ basename /dev/fd/62 00:04:45.037 ++ mktemp /tmp/62.XXX 00:04:45.037 + tmp_file_1=/tmp/62.HEs 00:04:45.037 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:45.037 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:04:45.037 + tmp_file_2=/tmp/spdk_tgt_config.json.8Xl 00:04:45.037 + ret=0 00:04:45.037 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:45.295 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:04:45.295 + diff -u /tmp/62.HEs /tmp/spdk_tgt_config.json.8Xl 00:04:45.295 + ret=1 00:04:45.295 + echo '=== Start of file: /tmp/62.HEs ===' 00:04:45.295 + cat /tmp/62.HEs 00:04:45.295 + echo '=== End of file: /tmp/62.HEs ===' 00:04:45.295 + echo '' 00:04:45.295 + echo '=== Start of file: /tmp/spdk_tgt_config.json.8Xl ===' 00:04:45.295 + cat /tmp/spdk_tgt_config.json.8Xl 00:04:45.295 + echo '=== End of file: /tmp/spdk_tgt_config.json.8Xl ===' 00:04:45.295 + echo '' 00:04:45.295 + rm /tmp/62.HEs /tmp/spdk_tgt_config.json.8Xl 00:04:45.295 + exit 1 00:04:45.295 10:22:25 json_config -- json_config/json_config.sh@398 -- # echo 'INFO: configuration change detected.' 00:04:45.295 INFO: configuration change detected. 
00:04:45.295 10:22:25 json_config -- json_config/json_config.sh@401 -- # json_config_test_fini 00:04:45.295 10:22:25 json_config -- json_config/json_config.sh@313 -- # timing_enter json_config_test_fini 00:04:45.295 10:22:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.295 10:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@314 -- # local ret=0 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@316 -- # [[ -n '' ]] 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@324 -- # [[ -n 3035808 ]] 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@327 -- # cleanup_bdev_subsystem_config 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@191 -- # timing_enter cleanup_bdev_subsystem_config 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@193 -- # [[ 0 -eq 1 ]] 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@200 -- # uname -s 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@200 -- # [[ Linux = Linux ]] 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@201 -- # rm -f /sample_aio 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@204 -- # [[ 0 -eq 1 ]] 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@208 -- # timing_exit cleanup_bdev_subsystem_config 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:45.296 10:22:25 json_config -- json_config/json_config.sh@330 -- # killprocess 3035808 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@954 -- # '[' -z 3035808 ']' 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@958 -- # kill -0 
3035808 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@959 -- # uname 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:45.296 10:22:25 json_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3035808 00:04:45.555 10:22:26 json_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:45.555 10:22:26 json_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:45.555 10:22:26 json_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3035808' 00:04:45.555 killing process with pid 3035808 00:04:45.555 10:22:26 json_config -- common/autotest_common.sh@973 -- # kill 3035808 00:04:45.555 10:22:26 json_config -- common/autotest_common.sh@978 -- # wait 3035808 00:04:47.458 10:22:28 json_config -- json_config/json_config.sh@333 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:04:47.458 10:22:28 json_config -- json_config/json_config.sh@334 -- # timing_exit json_config_test_fini 00:04:47.458 10:22:28 json_config -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:47.458 10:22:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.458 10:22:28 json_config -- json_config/json_config.sh@335 -- # return 0 00:04:47.458 10:22:28 json_config -- json_config/json_config.sh@403 -- # echo 'INFO: Success' 00:04:47.458 INFO: Success 00:04:47.458 00:04:47.458 real 0m16.842s 00:04:47.458 user 0m17.356s 00:04:47.458 sys 0m2.600s 00:04:47.458 10:22:28 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:47.458 10:22:28 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:47.458 ************************************ 00:04:47.458 END TEST json_config 00:04:47.458 ************************************ 00:04:47.458 10:22:28 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.458 10:22:28 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:47.458 10:22:28 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:47.458 10:22:28 -- common/autotest_common.sh@10 -- # set +x 00:04:47.718 ************************************ 00:04:47.718 START TEST json_config_extra_key 00:04:47.718 ************************************ 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lcov --version 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:47.718 10:22:28 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 
00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:22:28 json_config_extra_key -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:47.718 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:47.718 --rc genhtml_branch_coverage=1 00:04:47.718 --rc genhtml_function_coverage=1 00:04:47.718 --rc genhtml_legend=1 00:04:47.718 --rc geninfo_all_blocks=1 00:04:47.718 --rc geninfo_unexecuted_blocks=1 00:04:47.718 00:04:47.718 ' 00:04:47.718 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.718 10:22:28 json_config_extra_key -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@19 -- # NET_TYPE=phy-fallback 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:47.719 10:22:28 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:47.719 10:22:28 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:47.719 10:22:28 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:47.719 10:22:28 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:47.719 10:22:28 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.719 10:22:28 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.719 10:22:28 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.719 10:22:28 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:47.719 10:22:28 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@48 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@50 -- # : 0 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:04:47.719 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:04:47.719 10:22:28 json_config_extra_key -- nvmf/common.sh@54 -- # have_pci_nics=0 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:47.719 10:22:28 json_config_extra_key -- 
json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:04:47.719 INFO: launching applications... 00:04:47.719 10:22:28 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=3037290 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for 
target to run...' 00:04:47.719 Waiting for target to run... 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 3037290 /var/tmp/spdk_tgt.sock 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 3037290 ']' 00:04:47.719 10:22:28 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:47.719 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:47.719 10:22:28 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:47.719 [2024-11-20 10:22:28.439588] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:47.719 [2024-11-20 10:22:28.439637] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037290 ] 00:04:48.287 [2024-11-20 10:22:28.720829] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.287 [2024-11-20 10:22:28.755943] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.546 10:22:29 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:48.546 10:22:29 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:48.546 00:04:48.546 10:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:04:48.546 INFO: shutting down applications... 00:04:48.546 10:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 3037290 ]] 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 3037290 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3037290 00:04:48.546 10:22:29 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:49.112 10:22:29 json_config_extra_key -- 
json_config/common.sh@40 -- # (( i < 30 )) 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 3037290 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:49.112 10:22:29 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:49.112 SPDK target shutdown done 00:04:49.112 10:22:29 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:49.112 Success 00:04:49.112 00:04:49.112 real 0m1.575s 00:04:49.112 user 0m1.358s 00:04:49.112 sys 0m0.393s 00:04:49.112 10:22:29 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.112 10:22:29 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:49.112 ************************************ 00:04:49.112 END TEST json_config_extra_key 00:04:49.112 ************************************ 00:04:49.112 10:22:29 -- spdk/autotest.sh@161 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.112 10:22:29 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:49.112 10:22:29 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:49.112 10:22:29 -- common/autotest_common.sh@10 -- # set +x 00:04:49.371 ************************************ 00:04:49.371 START TEST alias_rpc 00:04:49.371 ************************************ 00:04:49.371 10:22:29 alias_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:49.371 * Looking for test storage... 
00:04:49.371 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:04:49.371 10:22:29 alias_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:49.371 10:22:29 alias_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:04:49.371 10:22:29 alias_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:49.371 10:22:29 alias_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:49.371 10:22:29 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:49.371 10:22:30 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@1707 -- 
# export 'LCOV=lcov 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:49.371 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:49.371 --rc genhtml_branch_coverage=1 00:04:49.371 --rc genhtml_function_coverage=1 00:04:49.371 --rc genhtml_legend=1 00:04:49.371 --rc geninfo_all_blocks=1 00:04:49.371 --rc geninfo_unexecuted_blocks=1 00:04:49.371 00:04:49.371 ' 00:04:49.371 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:49.371 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=3037599 00:04:49.371 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 3037599 00:04:49.371 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 3037599 ']' 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:49.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:49.371 10:22:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:49.371 [2024-11-20 10:22:30.071263] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
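The `cmp_versions 1.15 '<' 2` xtrace above (split each version on `IFS=.-:` into an array, then compare fields numerically, padding the shorter version with implicit zeros) roughly corresponds to the following simplified sketch. `version_lt` is a hypothetical name, and this version handles plain dot-separated numeric versions only — the real `scripts/common.sh` helper supports more operators and separators.

```shell
# Simplified sketch of the field-by-field version comparison in the trace.
version_lt() {
  local IFS=.
  local -a a=($1) b=($2)            # split "1.15" -> (1 15), "2" -> (2)
  local i n=$(( ${#a[@]} > ${#b[@]} ? ${#a[@]} : ${#b[@]} ))
  for (( i = 0; i < n; i++ )); do
    local x=${a[i]:-0} y=${b[i]:-0} # missing fields compare as 0
    (( x < y )) && return 0
    (( x > y )) && return 1
  done
  return 1                          # equal versions are not less-than
}
```

Here the test scripts use the check to decide whether the installed `lcov` (1.15) predates version 2, which selects the extra `--rc lcov_*` coverage options exported just above.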
00:04:49.371 [2024-11-20 10:22:30.071313] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037599 ] 00:04:49.630 [2024-11-20 10:22:30.144858] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:49.630 [2024-11-20 10:22:30.185024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:49.888 10:22:30 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:49.888 10:22:30 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:49.888 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:04:49.888 10:22:30 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 3037599 00:04:49.888 10:22:30 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 3037599 ']' 00:04:49.888 10:22:30 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 3037599 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3037599 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3037599' 00:04:50.148 killing process with pid 3037599 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@973 -- # kill 3037599 00:04:50.148 10:22:30 alias_rpc -- common/autotest_common.sh@978 -- # wait 3037599 00:04:50.408 00:04:50.408 real 0m1.123s 00:04:50.408 user 0m1.158s 00:04:50.408 sys 0m0.405s 00:04:50.408 10:22:30 alias_rpc -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:04:50.408 10:22:30 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:50.408 ************************************ 00:04:50.408 END TEST alias_rpc 00:04:50.408 ************************************ 00:04:50.408 10:22:30 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:50.408 10:22:31 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.408 10:22:31 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:50.408 10:22:31 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.408 10:22:31 -- common/autotest_common.sh@10 -- # set +x 00:04:50.408 ************************************ 00:04:50.408 START TEST spdkcli_tcp 00:04:50.408 ************************************ 00:04:50.408 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:04:50.408 * Looking for test storage... 
00:04:50.408 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:04:50.408 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:50.408 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:04:50.408 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:50.668 10:22:31 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.668 --rc genhtml_branch_coverage=1 00:04:50.668 --rc genhtml_function_coverage=1 00:04:50.668 --rc genhtml_legend=1 00:04:50.668 --rc geninfo_all_blocks=1 00:04:50.668 --rc geninfo_unexecuted_blocks=1 00:04:50.668 00:04:50.668 ' 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.668 --rc genhtml_branch_coverage=1 00:04:50.668 --rc genhtml_function_coverage=1 00:04:50.668 --rc genhtml_legend=1 00:04:50.668 --rc geninfo_all_blocks=1 00:04:50.668 --rc geninfo_unexecuted_blocks=1 00:04:50.668 00:04:50.668 ' 00:04:50.668 10:22:31 spdkcli_tcp -- 
common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.668 --rc genhtml_branch_coverage=1 00:04:50.668 --rc genhtml_function_coverage=1 00:04:50.668 --rc genhtml_legend=1 00:04:50.668 --rc geninfo_all_blocks=1 00:04:50.668 --rc geninfo_unexecuted_blocks=1 00:04:50.668 00:04:50.668 ' 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:50.668 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:50.668 --rc genhtml_branch_coverage=1 00:04:50.668 --rc genhtml_function_coverage=1 00:04:50.668 --rc genhtml_legend=1 00:04:50.668 --rc geninfo_all_blocks=1 00:04:50.668 --rc geninfo_unexecuted_blocks=1 00:04:50.668 00:04:50.668 ' 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=3037889 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 3037889 00:04:50.668 10:22:31 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 3037889 ']' 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:50.668 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:50.668 10:22:31 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:50.668 [2024-11-20 10:22:31.270763] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:50.668 [2024-11-20 10:22:31.270808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3037889 ] 00:04:50.668 [2024-11-20 10:22:31.343270] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:50.668 [2024-11-20 10:22:31.384259] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:50.668 [2024-11-20 10:22:31.384261] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.607 10:22:32 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:51.607 10:22:32 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:51.607 10:22:32 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=3037903 00:04:51.607 10:22:32 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:51.607 10:22:32 spdkcli_tcp -- 
spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:51.607 [ 00:04:51.607 "bdev_malloc_delete", 00:04:51.607 "bdev_malloc_create", 00:04:51.607 "bdev_null_resize", 00:04:51.607 "bdev_null_delete", 00:04:51.607 "bdev_null_create", 00:04:51.607 "bdev_nvme_cuse_unregister", 00:04:51.607 "bdev_nvme_cuse_register", 00:04:51.607 "bdev_opal_new_user", 00:04:51.607 "bdev_opal_set_lock_state", 00:04:51.607 "bdev_opal_delete", 00:04:51.607 "bdev_opal_get_info", 00:04:51.607 "bdev_opal_create", 00:04:51.607 "bdev_nvme_opal_revert", 00:04:51.607 "bdev_nvme_opal_init", 00:04:51.607 "bdev_nvme_send_cmd", 00:04:51.607 "bdev_nvme_set_keys", 00:04:51.607 "bdev_nvme_get_path_iostat", 00:04:51.607 "bdev_nvme_get_mdns_discovery_info", 00:04:51.607 "bdev_nvme_stop_mdns_discovery", 00:04:51.607 "bdev_nvme_start_mdns_discovery", 00:04:51.607 "bdev_nvme_set_multipath_policy", 00:04:51.607 "bdev_nvme_set_preferred_path", 00:04:51.607 "bdev_nvme_get_io_paths", 00:04:51.607 "bdev_nvme_remove_error_injection", 00:04:51.607 "bdev_nvme_add_error_injection", 00:04:51.607 "bdev_nvme_get_discovery_info", 00:04:51.607 "bdev_nvme_stop_discovery", 00:04:51.607 "bdev_nvme_start_discovery", 00:04:51.607 "bdev_nvme_get_controller_health_info", 00:04:51.607 "bdev_nvme_disable_controller", 00:04:51.607 "bdev_nvme_enable_controller", 00:04:51.607 "bdev_nvme_reset_controller", 00:04:51.607 "bdev_nvme_get_transport_statistics", 00:04:51.607 "bdev_nvme_apply_firmware", 00:04:51.607 "bdev_nvme_detach_controller", 00:04:51.607 "bdev_nvme_get_controllers", 00:04:51.607 "bdev_nvme_attach_controller", 00:04:51.607 "bdev_nvme_set_hotplug", 00:04:51.607 "bdev_nvme_set_options", 00:04:51.607 "bdev_passthru_delete", 00:04:51.607 "bdev_passthru_create", 00:04:51.607 "bdev_lvol_set_parent_bdev", 00:04:51.607 "bdev_lvol_set_parent", 00:04:51.607 "bdev_lvol_check_shallow_copy", 00:04:51.607 "bdev_lvol_start_shallow_copy", 00:04:51.607 "bdev_lvol_grow_lvstore", 00:04:51.607 
"bdev_lvol_get_lvols", 00:04:51.607 "bdev_lvol_get_lvstores", 00:04:51.607 "bdev_lvol_delete", 00:04:51.607 "bdev_lvol_set_read_only", 00:04:51.607 "bdev_lvol_resize", 00:04:51.607 "bdev_lvol_decouple_parent", 00:04:51.607 "bdev_lvol_inflate", 00:04:51.607 "bdev_lvol_rename", 00:04:51.607 "bdev_lvol_clone_bdev", 00:04:51.607 "bdev_lvol_clone", 00:04:51.607 "bdev_lvol_snapshot", 00:04:51.607 "bdev_lvol_create", 00:04:51.607 "bdev_lvol_delete_lvstore", 00:04:51.607 "bdev_lvol_rename_lvstore", 00:04:51.607 "bdev_lvol_create_lvstore", 00:04:51.607 "bdev_raid_set_options", 00:04:51.607 "bdev_raid_remove_base_bdev", 00:04:51.607 "bdev_raid_add_base_bdev", 00:04:51.607 "bdev_raid_delete", 00:04:51.607 "bdev_raid_create", 00:04:51.607 "bdev_raid_get_bdevs", 00:04:51.607 "bdev_error_inject_error", 00:04:51.607 "bdev_error_delete", 00:04:51.607 "bdev_error_create", 00:04:51.607 "bdev_split_delete", 00:04:51.607 "bdev_split_create", 00:04:51.607 "bdev_delay_delete", 00:04:51.607 "bdev_delay_create", 00:04:51.607 "bdev_delay_update_latency", 00:04:51.607 "bdev_zone_block_delete", 00:04:51.607 "bdev_zone_block_create", 00:04:51.607 "blobfs_create", 00:04:51.607 "blobfs_detect", 00:04:51.607 "blobfs_set_cache_size", 00:04:51.607 "bdev_aio_delete", 00:04:51.607 "bdev_aio_rescan", 00:04:51.607 "bdev_aio_create", 00:04:51.607 "bdev_ftl_set_property", 00:04:51.607 "bdev_ftl_get_properties", 00:04:51.607 "bdev_ftl_get_stats", 00:04:51.607 "bdev_ftl_unmap", 00:04:51.607 "bdev_ftl_unload", 00:04:51.607 "bdev_ftl_delete", 00:04:51.607 "bdev_ftl_load", 00:04:51.607 "bdev_ftl_create", 00:04:51.607 "bdev_virtio_attach_controller", 00:04:51.607 "bdev_virtio_scsi_get_devices", 00:04:51.607 "bdev_virtio_detach_controller", 00:04:51.607 "bdev_virtio_blk_set_hotplug", 00:04:51.607 "bdev_iscsi_delete", 00:04:51.607 "bdev_iscsi_create", 00:04:51.607 "bdev_iscsi_set_options", 00:04:51.607 "accel_error_inject_error", 00:04:51.607 "ioat_scan_accel_module", 00:04:51.607 "dsa_scan_accel_module", 
00:04:51.607 "iaa_scan_accel_module", 00:04:51.607 "vfu_virtio_create_fs_endpoint", 00:04:51.607 "vfu_virtio_create_scsi_endpoint", 00:04:51.607 "vfu_virtio_scsi_remove_target", 00:04:51.608 "vfu_virtio_scsi_add_target", 00:04:51.608 "vfu_virtio_create_blk_endpoint", 00:04:51.608 "vfu_virtio_delete_endpoint", 00:04:51.608 "keyring_file_remove_key", 00:04:51.608 "keyring_file_add_key", 00:04:51.608 "keyring_linux_set_options", 00:04:51.608 "fsdev_aio_delete", 00:04:51.608 "fsdev_aio_create", 00:04:51.608 "iscsi_get_histogram", 00:04:51.608 "iscsi_enable_histogram", 00:04:51.608 "iscsi_set_options", 00:04:51.608 "iscsi_get_auth_groups", 00:04:51.608 "iscsi_auth_group_remove_secret", 00:04:51.608 "iscsi_auth_group_add_secret", 00:04:51.608 "iscsi_delete_auth_group", 00:04:51.608 "iscsi_create_auth_group", 00:04:51.608 "iscsi_set_discovery_auth", 00:04:51.608 "iscsi_get_options", 00:04:51.608 "iscsi_target_node_request_logout", 00:04:51.608 "iscsi_target_node_set_redirect", 00:04:51.608 "iscsi_target_node_set_auth", 00:04:51.608 "iscsi_target_node_add_lun", 00:04:51.608 "iscsi_get_stats", 00:04:51.608 "iscsi_get_connections", 00:04:51.608 "iscsi_portal_group_set_auth", 00:04:51.608 "iscsi_start_portal_group", 00:04:51.608 "iscsi_delete_portal_group", 00:04:51.608 "iscsi_create_portal_group", 00:04:51.608 "iscsi_get_portal_groups", 00:04:51.608 "iscsi_delete_target_node", 00:04:51.608 "iscsi_target_node_remove_pg_ig_maps", 00:04:51.608 "iscsi_target_node_add_pg_ig_maps", 00:04:51.608 "iscsi_create_target_node", 00:04:51.608 "iscsi_get_target_nodes", 00:04:51.608 "iscsi_delete_initiator_group", 00:04:51.608 "iscsi_initiator_group_remove_initiators", 00:04:51.608 "iscsi_initiator_group_add_initiators", 00:04:51.608 "iscsi_create_initiator_group", 00:04:51.608 "iscsi_get_initiator_groups", 00:04:51.608 "nvmf_set_crdt", 00:04:51.608 "nvmf_set_config", 00:04:51.608 "nvmf_set_max_subsystems", 00:04:51.608 "nvmf_stop_mdns_prr", 00:04:51.608 "nvmf_publish_mdns_prr", 
00:04:51.608 "nvmf_subsystem_get_listeners", 00:04:51.608 "nvmf_subsystem_get_qpairs", 00:04:51.608 "nvmf_subsystem_get_controllers", 00:04:51.608 "nvmf_get_stats", 00:04:51.608 "nvmf_get_transports", 00:04:51.608 "nvmf_create_transport", 00:04:51.608 "nvmf_get_targets", 00:04:51.608 "nvmf_delete_target", 00:04:51.608 "nvmf_create_target", 00:04:51.608 "nvmf_subsystem_allow_any_host", 00:04:51.608 "nvmf_subsystem_set_keys", 00:04:51.608 "nvmf_subsystem_remove_host", 00:04:51.608 "nvmf_subsystem_add_host", 00:04:51.608 "nvmf_ns_remove_host", 00:04:51.608 "nvmf_ns_add_host", 00:04:51.608 "nvmf_subsystem_remove_ns", 00:04:51.608 "nvmf_subsystem_set_ns_ana_group", 00:04:51.608 "nvmf_subsystem_add_ns", 00:04:51.608 "nvmf_subsystem_listener_set_ana_state", 00:04:51.608 "nvmf_discovery_get_referrals", 00:04:51.608 "nvmf_discovery_remove_referral", 00:04:51.608 "nvmf_discovery_add_referral", 00:04:51.608 "nvmf_subsystem_remove_listener", 00:04:51.608 "nvmf_subsystem_add_listener", 00:04:51.608 "nvmf_delete_subsystem", 00:04:51.608 "nvmf_create_subsystem", 00:04:51.608 "nvmf_get_subsystems", 00:04:51.608 "env_dpdk_get_mem_stats", 00:04:51.608 "nbd_get_disks", 00:04:51.608 "nbd_stop_disk", 00:04:51.608 "nbd_start_disk", 00:04:51.608 "ublk_recover_disk", 00:04:51.608 "ublk_get_disks", 00:04:51.608 "ublk_stop_disk", 00:04:51.608 "ublk_start_disk", 00:04:51.608 "ublk_destroy_target", 00:04:51.608 "ublk_create_target", 00:04:51.608 "virtio_blk_create_transport", 00:04:51.608 "virtio_blk_get_transports", 00:04:51.608 "vhost_controller_set_coalescing", 00:04:51.608 "vhost_get_controllers", 00:04:51.608 "vhost_delete_controller", 00:04:51.608 "vhost_create_blk_controller", 00:04:51.608 "vhost_scsi_controller_remove_target", 00:04:51.608 "vhost_scsi_controller_add_target", 00:04:51.608 "vhost_start_scsi_controller", 00:04:51.608 "vhost_create_scsi_controller", 00:04:51.608 "thread_set_cpumask", 00:04:51.608 "scheduler_set_options", 00:04:51.608 "framework_get_governor", 00:04:51.608 
"framework_get_scheduler", 00:04:51.608 "framework_set_scheduler", 00:04:51.608 "framework_get_reactors", 00:04:51.608 "thread_get_io_channels", 00:04:51.608 "thread_get_pollers", 00:04:51.608 "thread_get_stats", 00:04:51.608 "framework_monitor_context_switch", 00:04:51.608 "spdk_kill_instance", 00:04:51.608 "log_enable_timestamps", 00:04:51.608 "log_get_flags", 00:04:51.608 "log_clear_flag", 00:04:51.608 "log_set_flag", 00:04:51.608 "log_get_level", 00:04:51.608 "log_set_level", 00:04:51.608 "log_get_print_level", 00:04:51.608 "log_set_print_level", 00:04:51.608 "framework_enable_cpumask_locks", 00:04:51.608 "framework_disable_cpumask_locks", 00:04:51.608 "framework_wait_init", 00:04:51.608 "framework_start_init", 00:04:51.608 "scsi_get_devices", 00:04:51.608 "bdev_get_histogram", 00:04:51.608 "bdev_enable_histogram", 00:04:51.608 "bdev_set_qos_limit", 00:04:51.608 "bdev_set_qd_sampling_period", 00:04:51.608 "bdev_get_bdevs", 00:04:51.608 "bdev_reset_iostat", 00:04:51.608 "bdev_get_iostat", 00:04:51.608 "bdev_examine", 00:04:51.608 "bdev_wait_for_examine", 00:04:51.608 "bdev_set_options", 00:04:51.608 "accel_get_stats", 00:04:51.608 "accel_set_options", 00:04:51.608 "accel_set_driver", 00:04:51.608 "accel_crypto_key_destroy", 00:04:51.608 "accel_crypto_keys_get", 00:04:51.608 "accel_crypto_key_create", 00:04:51.608 "accel_assign_opc", 00:04:51.608 "accel_get_module_info", 00:04:51.608 "accel_get_opc_assignments", 00:04:51.608 "vmd_rescan", 00:04:51.608 "vmd_remove_device", 00:04:51.608 "vmd_enable", 00:04:51.608 "sock_get_default_impl", 00:04:51.608 "sock_set_default_impl", 00:04:51.608 "sock_impl_set_options", 00:04:51.608 "sock_impl_get_options", 00:04:51.608 "iobuf_get_stats", 00:04:51.608 "iobuf_set_options", 00:04:51.608 "keyring_get_keys", 00:04:51.608 "vfu_tgt_set_base_path", 00:04:51.608 "framework_get_pci_devices", 00:04:51.608 "framework_get_config", 00:04:51.608 "framework_get_subsystems", 00:04:51.608 "fsdev_set_opts", 00:04:51.608 "fsdev_get_opts", 
00:04:51.608 "trace_get_info", 00:04:51.608 "trace_get_tpoint_group_mask", 00:04:51.608 "trace_disable_tpoint_group", 00:04:51.608 "trace_enable_tpoint_group", 00:04:51.608 "trace_clear_tpoint_mask", 00:04:51.608 "trace_set_tpoint_mask", 00:04:51.608 "notify_get_notifications", 00:04:51.608 "notify_get_types", 00:04:51.608 "spdk_get_version", 00:04:51.608 "rpc_get_methods" 00:04:51.608 ] 00:04:51.608 10:22:32 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:51.608 10:22:32 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:51.608 10:22:32 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 3037889 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 3037889 ']' 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 3037889 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:51.608 10:22:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3037889 00:04:51.867 10:22:32 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:51.867 10:22:32 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:51.867 10:22:32 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3037889' 00:04:51.867 killing process with pid 3037889 00:04:51.867 10:22:32 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 3037889 00:04:51.867 10:22:32 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 3037889 00:04:52.126 00:04:52.126 real 0m1.627s 00:04:52.126 user 0m3.028s 00:04:52.126 sys 0m0.456s 00:04:52.126 10:22:32 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.126 10:22:32 
spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 ************************************ 00:04:52.126 END TEST spdkcli_tcp 00:04:52.126 ************************************ 00:04:52.126 10:22:32 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:52.126 10:22:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.126 10:22:32 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.126 10:22:32 -- common/autotest_common.sh@10 -- # set +x 00:04:52.126 ************************************ 00:04:52.126 START TEST dpdk_mem_utility 00:04:52.126 ************************************ 00:04:52.126 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:52.126 * Looking for test storage... 00:04:52.126 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:04:52.126 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:52.126 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lcov --version 00:04:52.126 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:52.385 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@337 -- # read 
-ra ver2 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.385 10:22:32 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.386 10:22:32 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # export 
'LCOV_OPTS= 00:04:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.386 --rc genhtml_branch_coverage=1 00:04:52.386 --rc genhtml_function_coverage=1 00:04:52.386 --rc genhtml_legend=1 00:04:52.386 --rc geninfo_all_blocks=1 00:04:52.386 --rc geninfo_unexecuted_blocks=1 00:04:52.386 00:04:52.386 ' 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.386 --rc genhtml_branch_coverage=1 00:04:52.386 --rc genhtml_function_coverage=1 00:04:52.386 --rc genhtml_legend=1 00:04:52.386 --rc geninfo_all_blocks=1 00:04:52.386 --rc geninfo_unexecuted_blocks=1 00:04:52.386 00:04:52.386 ' 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.386 --rc genhtml_branch_coverage=1 00:04:52.386 --rc genhtml_function_coverage=1 00:04:52.386 --rc genhtml_legend=1 00:04:52.386 --rc geninfo_all_blocks=1 00:04:52.386 --rc geninfo_unexecuted_blocks=1 00:04:52.386 00:04:52.386 ' 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:52.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.386 --rc genhtml_branch_coverage=1 00:04:52.386 --rc genhtml_function_coverage=1 00:04:52.386 --rc genhtml_legend=1 00:04:52.386 --rc geninfo_all_blocks=1 00:04:52.386 --rc geninfo_unexecuted_blocks=1 00:04:52.386 00:04:52.386 ' 00:04:52.386 10:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:52.386 10:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=3038201 00:04:52.386 10:22:32 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 3038201 00:04:52.386 10:22:32 dpdk_mem_utility -- 
dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 3038201 ']' 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.386 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.386 10:22:32 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.386 [2024-11-20 10:22:32.959951] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:52.386 [2024-11-20 10:22:32.959997] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038201 ] 00:04:52.386 [2024-11-20 10:22:33.035089] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:52.386 [2024-11-20 10:22:33.076974] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.645 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:52.645 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:52.645 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:52.645 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:52.645 10:22:33 dpdk_mem_utility -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:04:52.645 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:52.645 { 00:04:52.645 "filename": "/tmp/spdk_mem_dump.txt" 00:04:52.645 } 00:04:52.645 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:52.645 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:04:52.645 DPDK memory size 810.000000 MiB in 1 heap(s) 00:04:52.645 1 heaps totaling size 810.000000 MiB 00:04:52.645 size: 810.000000 MiB heap id: 0 00:04:52.645 end heaps---------- 00:04:52.645 9 mempools totaling size 595.772034 MiB 00:04:52.645 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:52.645 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:52.645 size: 92.545471 MiB name: bdev_io_3038201 00:04:52.645 size: 50.003479 MiB name: msgpool_3038201 00:04:52.645 size: 36.509338 MiB name: fsdev_io_3038201 00:04:52.645 size: 21.763794 MiB name: PDU_Pool 00:04:52.645 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:52.645 size: 4.133484 MiB name: evtpool_3038201 00:04:52.645 size: 0.026123 MiB name: Session_Pool 00:04:52.645 end mempools------- 00:04:52.645 6 memzones totaling size 4.142822 MiB 00:04:52.645 size: 1.000366 MiB name: RG_ring_0_3038201 00:04:52.645 size: 1.000366 MiB name: RG_ring_1_3038201 00:04:52.645 size: 1.000366 MiB name: RG_ring_4_3038201 00:04:52.645 size: 1.000366 MiB name: RG_ring_5_3038201 00:04:52.645 size: 0.125366 MiB name: RG_ring_2_3038201 00:04:52.645 size: 0.015991 MiB name: RG_ring_3_3038201 00:04:52.645 end memzones------- 00:04:52.645 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:04:52.905 heap id: 0 total size: 810.000000 MiB number of busy elements: 44 number of free elements: 15 00:04:52.905 list of free elements. 
size: 10.862488 MiB 00:04:52.905 element at address: 0x200018a00000 with size: 0.999878 MiB 00:04:52.905 element at address: 0x200018c00000 with size: 0.999878 MiB 00:04:52.905 element at address: 0x200000400000 with size: 0.998535 MiB 00:04:52.905 element at address: 0x200031800000 with size: 0.994446 MiB 00:04:52.905 element at address: 0x200006400000 with size: 0.959839 MiB 00:04:52.905 element at address: 0x200012c00000 with size: 0.954285 MiB 00:04:52.905 element at address: 0x200018e00000 with size: 0.936584 MiB 00:04:52.905 element at address: 0x200000200000 with size: 0.717346 MiB 00:04:52.905 element at address: 0x20001a600000 with size: 0.582886 MiB 00:04:52.905 element at address: 0x200000c00000 with size: 0.495422 MiB 00:04:52.905 element at address: 0x20000a600000 with size: 0.490723 MiB 00:04:52.905 element at address: 0x200019000000 with size: 0.485657 MiB 00:04:52.905 element at address: 0x200003e00000 with size: 0.481934 MiB 00:04:52.905 element at address: 0x200027a00000 with size: 0.410034 MiB 00:04:52.905 element at address: 0x200000800000 with size: 0.355042 MiB 00:04:52.905 list of standard malloc elements. 
size: 199.218628 MiB 00:04:52.905 element at address: 0x20000a7fff80 with size: 132.000122 MiB 00:04:52.905 element at address: 0x2000065fff80 with size: 64.000122 MiB 00:04:52.905 element at address: 0x200018afff80 with size: 1.000122 MiB 00:04:52.905 element at address: 0x200018cfff80 with size: 1.000122 MiB 00:04:52.905 element at address: 0x200018efff80 with size: 1.000122 MiB 00:04:52.905 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:04:52.905 element at address: 0x200018eeff00 with size: 0.062622 MiB 00:04:52.905 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:04:52.905 element at address: 0x200018eefdc0 with size: 0.000305 MiB 00:04:52.905 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000004ffa00 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000004ffac0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000004ffb80 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000004ffd80 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000004ffe40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000085ae40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000085b040 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000085f300 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000087f5c0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000087f680 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000008ff940 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000008ffb40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200000c7ed40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200000cff000 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200000cff0c0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200003e7b600 with size: 0.000183 MiB 00:04:52.905 element at 
address: 0x200003e7b6c0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200003efb980 with size: 0.000183 MiB 00:04:52.905 element at address: 0x2000064fdd80 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000a67da00 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000a67dac0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x20000a6fdd80 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200012cf44c0 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200018eefc40 with size: 0.000183 MiB 00:04:52.905 element at address: 0x200018eefd00 with size: 0.000183 MiB 00:04:52.906 element at address: 0x2000190bc740 with size: 0.000183 MiB 00:04:52.906 element at address: 0x20001a695380 with size: 0.000183 MiB 00:04:52.906 element at address: 0x20001a695440 with size: 0.000183 MiB 00:04:52.906 element at address: 0x200027a68f80 with size: 0.000183 MiB 00:04:52.906 element at address: 0x200027a69040 with size: 0.000183 MiB 00:04:52.906 element at address: 0x200027a6fc40 with size: 0.000183 MiB 00:04:52.906 element at address: 0x200027a6fe40 with size: 0.000183 MiB 00:04:52.906 element at address: 0x200027a6ff00 with size: 0.000183 MiB 00:04:52.906 list of memzone associated elements. 
size: 599.918884 MiB 00:04:52.906 element at address: 0x20001a695500 with size: 211.416748 MiB 00:04:52.906 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:52.906 element at address: 0x200027a6ffc0 with size: 157.562561 MiB 00:04:52.906 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:52.906 element at address: 0x200012df4780 with size: 92.045044 MiB 00:04:52.906 associated memzone info: size: 92.044922 MiB name: MP_bdev_io_3038201_0 00:04:52.906 element at address: 0x200000dff380 with size: 48.003052 MiB 00:04:52.906 associated memzone info: size: 48.002930 MiB name: MP_msgpool_3038201_0 00:04:52.906 element at address: 0x200003ffdb80 with size: 36.008911 MiB 00:04:52.906 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_3038201_0 00:04:52.906 element at address: 0x2000191be940 with size: 20.255554 MiB 00:04:52.906 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:52.906 element at address: 0x2000319feb40 with size: 18.005066 MiB 00:04:52.906 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:52.906 element at address: 0x2000004fff00 with size: 3.000244 MiB 00:04:52.906 associated memzone info: size: 3.000122 MiB name: MP_evtpool_3038201_0 00:04:52.906 element at address: 0x2000009ffe00 with size: 2.000488 MiB 00:04:52.906 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_3038201 00:04:52.906 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:04:52.906 associated memzone info: size: 1.007996 MiB name: MP_evtpool_3038201 00:04:52.906 element at address: 0x20000a6fde40 with size: 1.008118 MiB 00:04:52.906 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:52.906 element at address: 0x2000190bc800 with size: 1.008118 MiB 00:04:52.906 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:52.906 element at address: 0x2000064fde40 with size: 1.008118 MiB 00:04:52.906 
associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:52.906 element at address: 0x200003efba40 with size: 1.008118 MiB 00:04:52.906 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:52.906 element at address: 0x200000cff180 with size: 1.000488 MiB 00:04:52.906 associated memzone info: size: 1.000366 MiB name: RG_ring_0_3038201 00:04:52.906 element at address: 0x2000008ffc00 with size: 1.000488 MiB 00:04:52.906 associated memzone info: size: 1.000366 MiB name: RG_ring_1_3038201 00:04:52.906 element at address: 0x200012cf4580 with size: 1.000488 MiB 00:04:52.906 associated memzone info: size: 1.000366 MiB name: RG_ring_4_3038201 00:04:52.906 element at address: 0x2000318fe940 with size: 1.000488 MiB 00:04:52.906 associated memzone info: size: 1.000366 MiB name: RG_ring_5_3038201 00:04:52.906 element at address: 0x20000087f740 with size: 0.500488 MiB 00:04:52.906 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_3038201 00:04:52.906 element at address: 0x200000c7ee00 with size: 0.500488 MiB 00:04:52.906 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_3038201 00:04:52.906 element at address: 0x20000a67db80 with size: 0.500488 MiB 00:04:52.906 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:52.906 element at address: 0x200003e7b780 with size: 0.500488 MiB 00:04:52.906 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:52.906 element at address: 0x20001907c540 with size: 0.250488 MiB 00:04:52.906 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:04:52.906 element at address: 0x2000002b7a40 with size: 0.125488 MiB 00:04:52.906 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_3038201 00:04:52.906 element at address: 0x20000085f3c0 with size: 0.125488 MiB 00:04:52.906 associated memzone info: size: 0.125366 MiB name: RG_ring_2_3038201 00:04:52.906 element at address: 0x2000064f5b80 with size: 0.031738 
MiB 00:04:52.906 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:52.906 element at address: 0x200027a69100 with size: 0.023743 MiB 00:04:52.906 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:52.906 element at address: 0x20000085b100 with size: 0.016113 MiB 00:04:52.906 associated memzone info: size: 0.015991 MiB name: RG_ring_3_3038201 00:04:52.906 element at address: 0x200027a6f240 with size: 0.002441 MiB 00:04:52.906 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:52.906 element at address: 0x2000004ffc40 with size: 0.000305 MiB 00:04:52.906 associated memzone info: size: 0.000183 MiB name: MP_msgpool_3038201 00:04:52.906 element at address: 0x2000008ffa00 with size: 0.000305 MiB 00:04:52.906 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_3038201 00:04:52.906 element at address: 0x20000085af00 with size: 0.000305 MiB 00:04:52.906 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_3038201 00:04:52.906 element at address: 0x200027a6fd00 with size: 0.000305 MiB 00:04:52.906 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:52.906 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:52.906 10:22:33 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 3038201 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 3038201 ']' 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 3038201 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3038201 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:52.906 10:22:33 
dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3038201' 00:04:52.906 killing process with pid 3038201 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 3038201 00:04:52.906 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 3038201 00:04:53.166 00:04:53.166 real 0m1.002s 00:04:53.166 user 0m0.912s 00:04:53.166 sys 0m0.424s 00:04:53.166 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:53.166 10:22:33 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:53.166 ************************************ 00:04:53.166 END TEST dpdk_mem_utility 00:04:53.166 ************************************ 00:04:53.166 10:22:33 -- spdk/autotest.sh@168 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:53.166 10:22:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:53.166 10:22:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.166 10:22:33 -- common/autotest_common.sh@10 -- # set +x 00:04:53.166 ************************************ 00:04:53.166 START TEST event 00:04:53.166 ************************************ 00:04:53.166 10:22:33 event -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:04:53.166 * Looking for test storage... 
00:04:53.425 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1693 -- # lcov --version 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:53.426 10:22:33 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:53.426 10:22:33 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:53.426 10:22:33 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:53.426 10:22:33 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:53.426 10:22:33 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:53.426 10:22:33 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:53.426 10:22:33 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:53.426 10:22:33 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:53.426 10:22:33 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:53.426 10:22:33 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:53.426 10:22:33 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:53.426 10:22:33 event -- scripts/common.sh@344 -- # case "$op" in 00:04:53.426 10:22:33 event -- scripts/common.sh@345 -- # : 1 00:04:53.426 10:22:33 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:53.426 10:22:33 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:53.426 10:22:33 event -- scripts/common.sh@365 -- # decimal 1 00:04:53.426 10:22:33 event -- scripts/common.sh@353 -- # local d=1 00:04:53.426 10:22:33 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:53.426 10:22:33 event -- scripts/common.sh@355 -- # echo 1 00:04:53.426 10:22:33 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:53.426 10:22:33 event -- scripts/common.sh@366 -- # decimal 2 00:04:53.426 10:22:33 event -- scripts/common.sh@353 -- # local d=2 00:04:53.426 10:22:33 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:53.426 10:22:33 event -- scripts/common.sh@355 -- # echo 2 00:04:53.426 10:22:33 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:53.426 10:22:33 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:53.426 10:22:33 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:53.426 10:22:33 event -- scripts/common.sh@368 -- # return 0 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:53.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.426 --rc genhtml_branch_coverage=1 00:04:53.426 --rc genhtml_function_coverage=1 00:04:53.426 --rc genhtml_legend=1 00:04:53.426 --rc geninfo_all_blocks=1 00:04:53.426 --rc geninfo_unexecuted_blocks=1 00:04:53.426 00:04:53.426 ' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:53.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.426 --rc genhtml_branch_coverage=1 00:04:53.426 --rc genhtml_function_coverage=1 00:04:53.426 --rc genhtml_legend=1 00:04:53.426 --rc geninfo_all_blocks=1 00:04:53.426 --rc geninfo_unexecuted_blocks=1 00:04:53.426 00:04:53.426 ' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:53.426 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:04:53.426 --rc genhtml_branch_coverage=1 00:04:53.426 --rc genhtml_function_coverage=1 00:04:53.426 --rc genhtml_legend=1 00:04:53.426 --rc geninfo_all_blocks=1 00:04:53.426 --rc geninfo_unexecuted_blocks=1 00:04:53.426 00:04:53.426 ' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:53.426 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:53.426 --rc genhtml_branch_coverage=1 00:04:53.426 --rc genhtml_function_coverage=1 00:04:53.426 --rc genhtml_legend=1 00:04:53.426 --rc geninfo_all_blocks=1 00:04:53.426 --rc geninfo_unexecuted_blocks=1 00:04:53.426 00:04:53.426 ' 00:04:53.426 10:22:33 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:04:53.426 10:22:33 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:53.426 10:22:33 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:53.426 10:22:33 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:53.426 10:22:33 event -- common/autotest_common.sh@10 -- # set +x 00:04:53.426 ************************************ 00:04:53.426 START TEST event_perf 00:04:53.426 ************************************ 00:04:53.426 10:22:34 event.event_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:53.426 Running I/O for 1 seconds...[2024-11-20 10:22:34.036870] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:53.426 [2024-11-20 10:22:34.036942] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038491 ] 00:04:53.426 [2024-11-20 10:22:34.116284] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.685 [2024-11-20 10:22:34.160962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.685 [2024-11-20 10:22:34.161071] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:53.685 [2024-11-20 10:22:34.161156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.685 [2024-11-20 10:22:34.161157] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:54.622 Running I/O for 1 seconds... 00:04:54.622 lcore 0: 204899 00:04:54.622 lcore 1: 204897 00:04:54.622 lcore 2: 204899 00:04:54.622 lcore 3: 204900 00:04:54.622 done. 
00:04:54.622 00:04:54.622 real 0m1.185s 00:04:54.622 user 0m4.104s 00:04:54.622 sys 0m0.079s 00:04:54.622 10:22:35 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:54.622 10:22:35 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:54.622 ************************************ 00:04:54.622 END TEST event_perf 00:04:54.622 ************************************ 00:04:54.622 10:22:35 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:54.622 10:22:35 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:54.622 10:22:35 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.622 10:22:35 event -- common/autotest_common.sh@10 -- # set +x 00:04:54.622 ************************************ 00:04:54.622 START TEST event_reactor 00:04:54.622 ************************************ 00:04:54.622 10:22:35 event.event_reactor -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:04:54.622 [2024-11-20 10:22:35.296342] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:04:54.622 [2024-11-20 10:22:35.296413] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038741 ] 00:04:54.881 [2024-11-20 10:22:35.376831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:54.881 [2024-11-20 10:22:35.422400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.818 test_start 00:04:55.818 oneshot 00:04:55.818 tick 100 00:04:55.818 tick 100 00:04:55.818 tick 250 00:04:55.818 tick 100 00:04:55.818 tick 100 00:04:55.818 tick 100 00:04:55.818 tick 250 00:04:55.818 tick 500 00:04:55.818 tick 100 00:04:55.818 tick 100 00:04:55.818 tick 250 00:04:55.818 tick 100 00:04:55.818 tick 100 00:04:55.818 test_end 00:04:55.818 00:04:55.818 real 0m1.188s 00:04:55.818 user 0m1.100s 00:04:55.818 sys 0m0.085s 00:04:55.818 10:22:36 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.818 10:22:36 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:55.818 ************************************ 00:04:55.818 END TEST event_reactor 00:04:55.818 ************************************ 00:04:55.818 10:22:36 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:55.818 10:22:36 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:55.818 10:22:36 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:55.818 10:22:36 event -- common/autotest_common.sh@10 -- # set +x 00:04:55.819 ************************************ 00:04:55.819 START TEST event_reactor_perf 00:04:55.819 ************************************ 00:04:55.819 10:22:36 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf 
-t 1 00:04:56.078 [2024-11-20 10:22:36.554786] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:56.078 [2024-11-20 10:22:36.554859] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3038914 ] 00:04:56.078 [2024-11-20 10:22:36.632451] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.078 [2024-11-20 10:22:36.672526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.015 test_start 00:04:57.015 test_end 00:04:57.015 Performance: 522201 events per second 00:04:57.015 00:04:57.015 real 0m1.175s 00:04:57.015 user 0m1.097s 00:04:57.015 sys 0m0.074s 00:04:57.015 10:22:37 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:57.015 10:22:37 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:57.015 ************************************ 00:04:57.015 END TEST event_reactor_perf 00:04:57.015 ************************************ 00:04:57.275 10:22:37 event -- event/event.sh@49 -- # uname -s 00:04:57.275 10:22:37 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:57.275 10:22:37 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:57.275 10:22:37 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.275 10:22:37 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.275 10:22:37 event -- common/autotest_common.sh@10 -- # set +x 00:04:57.275 ************************************ 00:04:57.275 START TEST event_scheduler 00:04:57.275 ************************************ 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:04:57.275 * Looking for test storage... 00:04:57.275 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lcov --version 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:57.275 10:22:37 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc 
genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:04:57.275 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:57.275 --rc genhtml_branch_coverage=1 00:04:57.275 --rc genhtml_function_coverage=1 00:04:57.275 --rc genhtml_legend=1 00:04:57.275 --rc geninfo_all_blocks=1 00:04:57.275 --rc geninfo_unexecuted_blocks=1 00:04:57.275 00:04:57.275 ' 00:04:57.275 10:22:37 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:57.275 10:22:37 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=3039229 00:04:57.275 10:22:37 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:57.275 10:22:37 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:57.275 10:22:37 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 3039229 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 3039229 ']' 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:57.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:57.275 10:22:37 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.535 [2024-11-20 10:22:38.014435] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:04:57.535 [2024-11-20 10:22:38.014493] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3039229 ] 00:04:57.535 [2024-11-20 10:22:38.070746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:57.535 [2024-11-20 10:22:38.116248] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.535 [2024-11-20 10:22:38.116272] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:04:57.535 [2024-11-20 10:22:38.116359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:04:57.535 [2024-11-20 10:22:38.116359] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:04:57.535 10:22:38 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.535 10:22:38 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:57.535 10:22:38 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:57.535 10:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.535 10:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.536 [2024-11-20 10:22:38.177030] dpdk_governor.c: 178:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:04:57.536 [2024-11-20 10:22:38.177047] 
scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:57.536 [2024-11-20 10:22:38.177056] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:57.536 [2024-11-20 10:22:38.177062] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:57.536 [2024-11-20 10:22:38.177067] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.536 10:22:38 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.536 [2024-11-20 10:22:38.249941] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.536 10:22:38 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:57.536 10:22:38 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 ************************************ 00:04:57.795 START TEST scheduler_create_thread 00:04:57.795 ************************************ 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 2 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 3 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 4 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 5 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.795 10:22:38 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.795 6 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.795 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 7 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 8 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 
event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 9 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 10 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:57.796 10:22:38 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:58.732 10:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:58.732 10:22:39 
event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:58.732 10:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:58.732 10:22:39 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:00.109 10:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:00.109 10:22:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:00.109 10:22:40 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:00.109 10:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:00.109 10:22:40 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.046 10:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:01.046 00:05:01.046 real 0m3.383s 00:05:01.046 user 0m0.024s 00:05:01.046 sys 0m0.006s 00:05:01.046 10:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.046 10:22:41 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:01.046 ************************************ 00:05:01.046 END TEST scheduler_create_thread 00:05:01.046 ************************************ 00:05:01.046 10:22:41 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:01.046 10:22:41 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 3039229 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 3039229 ']' 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@958 -- # 
kill -0 3039229 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3039229 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3039229' 00:05:01.046 killing process with pid 3039229 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 3039229 00:05:01.046 10:22:41 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 3039229 00:05:01.615 [2024-11-20 10:22:42.049995] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
00:05:01.615 00:05:01.615 real 0m4.464s 00:05:01.615 user 0m7.871s 00:05:01.615 sys 0m0.367s 00:05:01.615 10:22:42 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:01.615 10:22:42 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 ************************************ 00:05:01.615 END TEST event_scheduler 00:05:01.615 ************************************ 00:05:01.615 10:22:42 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:01.615 10:22:42 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:01.615 10:22:42 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:01.615 10:22:42 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:01.615 10:22:42 event -- common/autotest_common.sh@10 -- # set +x 00:05:01.615 ************************************ 00:05:01.615 START TEST app_repeat 00:05:01.615 ************************************ 00:05:01.615 10:22:42 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:05:01.615 10:22:42 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:01.615 10:22:42 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:01.615 10:22:42 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:01.615 10:22:42 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@19 -- # repeat_pid=3040023 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 3040023' 00:05:01.616 Process app_repeat pid: 3040023 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:01.616 spdk_app_start Round 0 00:05:01.616 10:22:42 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3040023 /var/tmp/spdk-nbd.sock 00:05:01.616 10:22:42 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3040023 ']' 00:05:01.616 10:22:42 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:01.616 10:22:42 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:01.875 10:22:42 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:01.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:01.875 10:22:42 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:01.875 10:22:42 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:01.875 [2024-11-20 10:22:42.366485] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:01.875 [2024-11-20 10:22:42.366539] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3040023 ] 00:05:01.875 [2024-11-20 10:22:42.442417] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:01.875 [2024-11-20 10:22:42.483018] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:01.875 [2024-11-20 10:22:42.483019] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:01.875 10:22:42 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:01.875 10:22:42 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:01.875 10:22:42 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.134 Malloc0 00:05:02.134 10:22:42 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.393 Malloc1 00:05:02.393 10:22:42 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:02.393 
10:22:42 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.393 10:22:42 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:02.652 /dev/nbd0 00:05:02.652 10:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:02.652 10:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:02.652 1+0 records in 00:05:02.652 1+0 records out 00:05:02.652 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000174358 s, 23.5 MB/s 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.652 10:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.653 10:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.653 10:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.653 10:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.653 10:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.653 10:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.653 10:22:43 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:02.912 /dev/nbd1 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:02.912 10:22:43 event.app_repeat -- 
common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:02.912 1+0 records in 00:05:02.912 1+0 records out 00:05:02.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00019501 s, 21.0 MB/s 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:02.912 10:22:43 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:02.912 10:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.171 10:22:43 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.171 { 00:05:03.171 "nbd_device": "/dev/nbd0", 00:05:03.171 "bdev_name": "Malloc0" 00:05:03.171 }, 00:05:03.171 { 00:05:03.171 "nbd_device": "/dev/nbd1", 00:05:03.172 "bdev_name": "Malloc1" 00:05:03.172 } 00:05:03.172 ]' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.172 { 00:05:03.172 "nbd_device": "/dev/nbd0", 00:05:03.172 "bdev_name": "Malloc0" 00:05:03.172 
}, 00:05:03.172 { 00:05:03.172 "nbd_device": "/dev/nbd1", 00:05:03.172 "bdev_name": "Malloc1" 00:05:03.172 } 00:05:03.172 ]' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.172 /dev/nbd1' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.172 /dev/nbd1' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.172 256+0 records in 00:05:03.172 256+0 records out 00:05:03.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0106717 s, 98.3 MB/s 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.172 256+0 records in 00:05:03.172 256+0 records out 00:05:03.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137953 s, 76.0 MB/s 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.172 256+0 records in 00:05:03.172 256+0 records out 00:05:03.172 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0146663 s, 71.5 MB/s 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.172 10:22:43 event.app_repeat -- 
bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.172 10:22:43 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.431 10:22:44 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:03.691 10:22:44 event.app_repeat -- 
bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.691 10:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:03.949 10:22:44 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:03.949 10:22:44 event.app_repeat -- event/event.sh@34 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.208 10:22:44 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:04.208 [2024-11-20 10:22:44.856903] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:04.208 [2024-11-20 10:22:44.893008] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:04.208 [2024-11-20 10:22:44.893010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:04.208 [2024-11-20 10:22:44.933293] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:04.208 [2024-11-20 10:22:44.933330] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:07.497 10:22:47 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:07.497 10:22:47 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:07.497 spdk_app_start Round 1 00:05:07.497 10:22:47 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3040023 /var/tmp/spdk-nbd.sock 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3040023 ']' 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:07.497 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
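The write/verify cycle traced above (`nbd_dd_data_verify` in `bdev/nbd_common.sh`) fills a 1 MiB pattern file from `/dev/urandom`, copies it onto each exported NBD device with `dd oflag=direct`, and then byte-compares the device contents back against the pattern with `cmp`. A minimal standalone sketch of that cycle, using two hypothetical regular files (`/tmp/fake_nbd0`, `/tmp/fake_nbd1`) as stand-ins for the real `/dev/nbd0` and `/dev/nbd1` block devices (so `oflag=direct` is dropped):

```shell
# Sketch of the nbd_dd_data_verify write/verify cycle, under the assumption
# that /tmp/fake_nbd0 and /tmp/fake_nbd1 stand in for the real NBD devices.
set -e
tmp_file=$(mktemp)

# "write" phase: build a 1 MiB random pattern, then copy it to each device
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 2>/dev/null
for dev in /tmp/fake_nbd0 /tmp/fake_nbd1; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 2>/dev/null
done

# "verify" phase: compare the first 1 MiB of each device against the pattern;
# cmp exits non-zero (and set -e aborts) on the first mismatching byte
for dev in /tmp/fake_nbd0 /tmp/fake_nbd1; do
    cmp -b -n 1M "$tmp_file" "$dev"
done

rm "$tmp_file"
echo 'verify OK'
```

Deleting the pattern file at the end mirrors the trace's `rm .../nbdrandtest` step; on the real devices, `oflag=direct` bypasses the page cache so the verify reads what actually reached the NBD backend.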
00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:07.497 10:22:47 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:07.497 10:22:47 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.497 Malloc0 00:05:07.497 10:22:48 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:07.756 Malloc1 00:05:07.756 10:22:48 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:07.756 10:22:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.014 /dev/nbd0 00:05:08.014 10:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.014 10:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.014 1+0 records in 00:05:08.014 1+0 records out 00:05:08.014 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000203177 s, 20.2 MB/s 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.014 10:22:48 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.014 10:22:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.014 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.014 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.014 10:22:48 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.273 /dev/nbd1 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.273 1+0 records in 00:05:08.273 1+0 records out 00:05:08.273 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000185886 s, 22.0 MB/s 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.273 10:22:48 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.273 10:22:48 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:08.532 { 00:05:08.532 "nbd_device": "/dev/nbd0", 00:05:08.532 "bdev_name": "Malloc0" 00:05:08.532 }, 00:05:08.532 { 00:05:08.532 "nbd_device": "/dev/nbd1", 00:05:08.532 "bdev_name": "Malloc1" 00:05:08.532 } 00:05:08.532 ]' 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:08.532 { 00:05:08.532 "nbd_device": "/dev/nbd0", 00:05:08.532 "bdev_name": "Malloc0" 00:05:08.532 }, 00:05:08.532 { 00:05:08.532 "nbd_device": "/dev/nbd1", 00:05:08.532 "bdev_name": "Malloc1" 00:05:08.532 } 00:05:08.532 ]' 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:08.532 /dev/nbd1' 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:08.532 /dev/nbd1' 00:05:08.532 
10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:08.532 10:22:49 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:08.533 256+0 records in 00:05:08.533 256+0 records out 00:05:08.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103078 s, 102 MB/s 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:08.533 256+0 records in 00:05:08.533 256+0 records out 00:05:08.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0142939 s, 73.4 MB/s 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd 
if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:08.533 256+0 records in 00:05:08.533 256+0 records out 00:05:08.533 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147468 s, 71.1 MB/s 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' 
'/dev/nbd1') 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.533 10:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:08.801 10:22:49 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.080 10:22:49 event.app_repeat -- 
bdev/nbd_common.sh@41 -- # break 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.080 10:22:49 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.394 10:22:49 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.394 10:22:49 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.394 10:22:49 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:09.394 10:22:50 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:09.656 [2024-11-20 10:22:50.152534] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:09.656 [2024-11-20 10:22:50.190962] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:09.656 [2024-11-20 10:22:50.190963] 
reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.656 [2024-11-20 10:22:50.232589] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:09.656 [2024-11-20 10:22:50.232629] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:12.944 10:22:53 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:12.944 10:22:53 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:12.944 spdk_app_start Round 2 00:05:12.944 10:22:53 event.app_repeat -- event/event.sh@25 -- # waitforlisten 3040023 /var/tmp/spdk-nbd.sock 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3040023 ']' 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:12.944 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
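The `nbd_get_count` calls in the trace parse the JSON reply of `rpc.py nbd_get_disks` with `jq` and count the `/dev/nbd` entries with `grep -c`. A small offline sketch of that logic, using the two-disk JSON copied from this run (no RPC socket is contacted; `jq` is assumed to be installed):

```shell
# Sketch of nbd_get_count from bdev/nbd_common.sh, fed with the nbd_get_disks
# reply seen earlier in this log instead of a live RPC call.
nbd_disks_json='[{"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"},
                 {"nbd_device": "/dev/nbd1", "bdev_name": "Malloc1"}]'

# -r emits raw strings, one device path per line
nbd_disks_name=$(echo "$nbd_disks_json" | jq -r '.[] | .nbd_device')

# grep -c exits 1 when the count is zero, hence the `|| true` guard
# (the `-- # true` step in the trace) so an empty list yields count=0
count=$(echo "$nbd_disks_name" | grep -c /dev/nbd || true)
echo "$count"    # prints 2 for the two attached disks
```

The same expression with an empty `[]` reply yields `count=0`, which is exactly the post-teardown check (`'[' 0 -ne 0 ']'`) the trace performs before killing the app instance.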
00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:12.944 10:22:53 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:12.944 10:22:53 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.944 Malloc0 00:05:12.944 10:22:53 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:12.944 Malloc1 00:05:12.945 10:22:53 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@11 
-- # local nbd_list 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:12.945 10:22:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:13.217 /dev/nbd0 00:05:13.217 10:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:13.217 10:22:53 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.217 1+0 records in 00:05:13.217 1+0 records out 00:05:13.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000227269 s, 18.0 MB/s 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.217 10:22:53 event.app_repeat -- 
common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.217 10:22:53 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.217 10:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.217 10:22:53 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.217 10:22:53 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:13.477 /dev/nbd1 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:13.478 1+0 records in 00:05:13.478 1+0 records out 00:05:13.478 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000226353 s, 18.1 MB/s 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:13.478 10:22:54 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:13.478 10:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:13.737 { 00:05:13.737 "nbd_device": "/dev/nbd0", 00:05:13.737 "bdev_name": "Malloc0" 00:05:13.737 }, 00:05:13.737 { 00:05:13.737 "nbd_device": "/dev/nbd1", 00:05:13.737 "bdev_name": "Malloc1" 00:05:13.737 } 00:05:13.737 ]' 00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:13.737 { 00:05:13.737 "nbd_device": "/dev/nbd0", 00:05:13.737 "bdev_name": "Malloc0" 00:05:13.737 }, 00:05:13.737 { 00:05:13.737 "nbd_device": "/dev/nbd1", 00:05:13.737 "bdev_name": "Malloc1" 00:05:13.737 } 00:05:13.737 ]' 00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:13.737 /dev/nbd1' 00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:13.737 /dev/nbd1' 00:05:13.737 
10:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']'
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']'
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256
00:05:13.737 256+0 records in
00:05:13.737 256+0 records out
00:05:13.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.010664 s, 98.3 MB/s
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct
00:05:13.737 256+0 records in
00:05:13.737 256+0 records out
00:05:13.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0139406 s, 75.2 MB/s
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}"
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct
00:05:13.737 256+0 records in
00:05:13.737 256+0 records out
00:05:13.737 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0144777 s, 72.4 MB/s
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']'
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']'
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}"
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1'
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1')
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:13.737 10:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}"
00:05:13.996 10:22:54 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 ))
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 ))
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@41 -- # break
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock
00:05:14.255 10:22:54 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]'
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]'
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device'
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name=
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo ''
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # true
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']'
00:05:14.515 10:22:55 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0
00:05:14.515 10:22:55 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM
00:05:14.774 10:22:55 event.app_repeat -- event/event.sh@35 -- # sleep 3
00:05:14.774 [2024-11-20 10:22:55.454007] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:05:14.774 [2024-11-20 10:22:55.490754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:14.774 [2024-11-20 10:22:55.490754] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:05:15.034 [2024-11-20 10:22:55.531519] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered.
00:05:15.034 [2024-11-20 10:22:55.531560] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered.
00:05:18.321 10:22:58 event.app_repeat -- event/event.sh@38 -- # waitforlisten 3040023 /var/tmp/spdk-nbd.sock
00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 3040023 ']'
00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock
00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...
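The `nbd_dd_data_verify` trace above follows a write-then-verify pattern: fill one temporary file with random data, copy it to every target, then `cmp` each target back against the source. The sketch below reproduces that pattern with plain temp files standing in for `/dev/nbd0` and `/dev/nbd1` (an assumption for illustration; the real test writes to NBD block devices with `oflag=direct`):

```shell
set -euo pipefail

# One shared random pattern plus two stand-in targets (plain files,
# not real /dev/nbd* devices -- an assumption for this sketch).
tmp_file=$(mktemp)
targets=("$(mktemp)" "$(mktemp)")

# Write phase: generate 1 MiB of random data, copy it to each target.
dd if=/dev/urandom of="$tmp_file" bs=4096 count=256 status=none
for t in "${targets[@]}"; do
    dd if="$tmp_file" of="$t" bs=4096 count=256 status=none
done

# Verify phase: byte-compare the first 1 MiB of each target against
# the source; cmp exits non-zero (failing the script) on any mismatch.
for t in "${targets[@]}"; do
    cmp -b -n 1M "$tmp_file" "$t"
done
echo "verify OK"

# Cleanup, mirroring the trace's `rm .../nbdrandtest`.
rm -f "$tmp_file" "${targets[@]}"
```

Writing one shared pattern and comparing with `cmp -n` keeps the verify step O(data size) with no per-block bookkeeping, which is why the trace reuses the same `nbdrandtest` file for both phases.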
00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:18.321 10:22:58 event.app_repeat -- event/event.sh@39 -- # killprocess 3040023 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 3040023 ']' 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 3040023 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3040023 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3040023' 00:05:18.321 killing process with pid 3040023 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@973 -- # kill 3040023 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@978 -- # wait 3040023 00:05:18.321 spdk_app_start is called in Round 0. 00:05:18.321 Shutdown signal received, stop current app iteration 00:05:18.321 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:18.321 spdk_app_start is called in Round 1. 00:05:18.321 Shutdown signal received, stop current app iteration 00:05:18.321 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:18.321 spdk_app_start is called in Round 2. 
00:05:18.321 Shutdown signal received, stop current app iteration 00:05:18.321 Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 reinitialization... 00:05:18.321 spdk_app_start is called in Round 3. 00:05:18.321 Shutdown signal received, stop current app iteration 00:05:18.321 10:22:58 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:18.321 10:22:58 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:18.321 00:05:18.321 real 0m16.385s 00:05:18.321 user 0m35.990s 00:05:18.321 sys 0m2.566s 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:18.321 10:22:58 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:18.321 ************************************ 00:05:18.321 END TEST app_repeat 00:05:18.321 ************************************ 00:05:18.321 10:22:58 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:18.321 10:22:58 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.321 10:22:58 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:18.321 10:22:58 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:18.321 10:22:58 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.321 ************************************ 00:05:18.321 START TEST cpu_locks 00:05:18.321 ************************************ 00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:18.321 * Looking for test storage... 
00:05:18.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lcov --version
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-:
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-:
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<'
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@345 -- # : 1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 ))
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@353 -- # local d=2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@355 -- # echo 2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:05:18.321 10:22:58 event.cpu_locks -- scripts/common.sh@368 -- # return 0
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:05:18.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.321 --rc genhtml_branch_coverage=1
00:05:18.321 --rc genhtml_function_coverage=1
00:05:18.321 --rc genhtml_legend=1
00:05:18.321 --rc geninfo_all_blocks=1
00:05:18.321 --rc geninfo_unexecuted_blocks=1
00:05:18.321
00:05:18.321 '
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:05:18.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.321 --rc genhtml_branch_coverage=1
00:05:18.321 --rc genhtml_function_coverage=1
00:05:18.321 --rc genhtml_legend=1
00:05:18.321 --rc geninfo_all_blocks=1
00:05:18.321 --rc geninfo_unexecuted_blocks=1
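The `cmp_versions` trace above splits both version strings on `.-:`, pads the shorter one with zeros via `decimal`, and compares field by field. A simplified, hedged re-implementation of that idea (not the exact SPDK `scripts/common.sh` helper, and assuming purely numeric components):

```shell
# version_lt A B -> exit 0 iff version A is strictly less than B.
# Mirrors the field-by-field comparison traced above: split on ".-:",
# treat missing fields as 0 (so "2" compares like "2.0"), and decide
# at the first differing field. Sketch only; the real cmp_versions in
# scripts/common.sh also handles <=, >=, and non-numeric guards.
version_lt() {
    local -a ver1 ver2
    local IFS=.-:
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        local a=${ver1[v]:-0} b=${ver2[v]:-0}   # absent field counts as 0
        if (( a < b )); then return 0; fi        # strictly less here
        if (( a > b )); then return 1; fi        # strictly greater here
    done
    return 1   # equal versions are not "less than"
}

version_lt 1.15 2 && echo "1.15 < 2"
```

Comparing numerically per field (rather than lexically on the whole string) is what makes `1.2 < 1.10` come out true, which a plain string comparison would get wrong.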
00:05:18.321
00:05:18.321 '
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:05:18.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.321 --rc genhtml_branch_coverage=1
00:05:18.321 --rc genhtml_function_coverage=1
00:05:18.321 --rc genhtml_legend=1
00:05:18.321 --rc geninfo_all_blocks=1
00:05:18.321 --rc geninfo_unexecuted_blocks=1
00:05:18.321
00:05:18.321 '
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:05:18.321 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:05:18.321 --rc genhtml_branch_coverage=1
00:05:18.321 --rc genhtml_function_coverage=1
00:05:18.321 --rc genhtml_legend=1
00:05:18.321 --rc geninfo_all_blocks=1
00:05:18.321 --rc geninfo_unexecuted_blocks=1
00:05:18.321
00:05:18.321 '
00:05:18.321 10:22:58 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock
00:05:18.321 10:22:58 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock
00:05:18.321 10:22:58 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT
00:05:18.321 10:22:58 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:18.321 10:22:58 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.321 ************************************
00:05:18.321 START TEST default_locks
00:05:18.321 ************************************
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=3043028
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 3043028
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3043028 ']'
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:18.321 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:18.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:18.322 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:18.322 10:22:58 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:18.580 [2024-11-20 10:22:59.047979] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:18.580 [2024-11-20 10:22:59.048026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043028 ]
00:05:18.580 [2024-11-20 10:22:59.122728] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:18.580 [2024-11-20 10:22:59.161831] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:18.839 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:18.839 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0
00:05:18.839 10:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 3043028
00:05:18.839 10:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 3043028
00:05:18.839 10:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:19.098 lslocks: write error
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 3043028
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 3043028 ']'
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 3043028
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:19.098 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043028
00:05:19.358 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:19.358 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:19.358 10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043028'
killing process with pid 3043028
10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 3043028
10:22:59 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 3043028
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 3043028
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3043028
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 3043028
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 3043028 ']'
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
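The `killprocess` steps traced above form a defensive kill pattern: confirm the pid is still alive with `kill -0`, inspect its command name via `ps` so a stale or reused pid (for instance one now owned by `sudo`) is never signalled by mistake, then kill and reap it. A minimal sketch of that pattern, simplified from what the trace shows of `autotest_common.sh` (the real helper carries extra retries and platform branches):

```shell
# killprocess PID: signal PID only after verifying it is alive and is
# not a privileged wrapper. Sketch of the pattern in the trace above.
killprocess() {
    local pid=$1
    [ -n "$pid" ] || return 1
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    local process_name
    process_name=$(ps --no-headers -o comm= "$pid") # command name only
    if [ "$process_name" = "sudo" ]; then
        return 1    # refuse: the pid was reused by a sudo wrapper
    fi
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # reap our own child
}

sleep 30 &
bgpid=$!
killprocess "$bgpid"
```

The `ps -o comm=` check matters on a busy CI node: pids recycle quickly, so between `kill -0` and `kill` the number could already belong to an unrelated process, and checking the command name narrows that window.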
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.617 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3043028) - No such process
00:05:19.617 ERROR: process (pid: 3043028) is no longer running
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:19.617
00:05:19.617 real 0m1.182s
00:05:19.617 user 0m1.146s
00:05:19.617 sys 0m0.536s
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:19.617 10:23:00 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.617 ************************************
00:05:19.617 END TEST default_locks
00:05:19.617 ************************************
00:05:19.617 10:23:00 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc
10:23:00 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:05:19.617 10:23:00 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
00:05:19.617 10:23:00 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:19.617 ************************************
00:05:19.617 START TEST default_locks_via_rpc
00:05:19.617 ************************************
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=3043287
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 3043287
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3043287 ']'
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:19.617 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:19.618 [2024-11-20 10:23:00.299880] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:19.618 [2024-11-20 10:23:00.299925] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043287 ]
00:05:19.877 [2024-11-20 10:23:00.375097] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:19.877 [2024-11-20 10:23:00.413069] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:20.135 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=()
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 ))
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 3043287
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 3043287
00:05:20.136 10:23:00 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 3043287
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 3043287 ']'
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 3043287
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043287
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:05:20.704 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043287'
killing process with pid 3043287
10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 3043287
10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 3043287
00:05:20.963
00:05:20.963 real 0m1.255s
00:05:20.963 user 0m1.219s
00:05:20.963 sys 0m0.552s
00:05:20.963 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:05:20.963 10:23:01 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x
00:05:20.963 ************************************
00:05:20.963 END TEST default_locks_via_rpc
00:05:20.963 ************************************
00:05:20.963 10:23:01 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask
10:23:01 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
10:23:01 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable
10:23:01 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x
00:05:20.963 ************************************
00:05:20.963 START TEST non_locking_app_on_locked_coremask
00:05:20.963 ************************************
00:05:20.964 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=3043541
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 3043541 /var/tmp/spdk.sock
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3043541 ']'
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:05:20.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:05:20.964 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:20.964 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:20.964 [2024-11-20 10:23:01.618179] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:20.964 [2024-11-20 10:23:01.618228] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043541 ]
00:05:21.223 [2024-11-20 10:23:01.692150] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:05:21.223 [2024-11-20 10:23:01.734824] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=3043552
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 3043552 /var/tmp/spdk2.sock
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3043552 ']'
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable
00:05:21.223 10:23:01 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x
00:05:21.482 [2024-11-20 10:23:01.997447] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:05:21.482 [2024-11-20 10:23:01.997491] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3043552 ]
00:05:21.482 [2024-11-20 10:23:02.088465] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated.
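Both `waitfornbd_exit` and `waitforlisten` in the trace use the same bounded polling idiom: retry a readiness check up to a fixed count (`(( i <= 20 ))` / `max_retries=100`) with a short sleep, then give up. The sketch below polls for a plain file, which is an assumption for illustration (the real helpers grep `/proc/partitions` or probe a UNIX domain socket):

```shell
# wait_for_path PATH [MAX_RETRIES]: bounded polling loop in the style
# of waitfornbd_exit / waitforlisten traced above. Returns 0 once the
# path exists, 1 after MAX_RETRIES attempts (default 20).
wait_for_path() {
    local path=$1 max_retries=${2:-20} i
    for (( i = 1; i <= max_retries; i++ )); do
        if [ -e "$path" ]; then
            return 0            # resource appeared
        fi
        sleep 0.1               # short backoff between probes
    done
    echo "timed out waiting for $path" >&2
    return 1
}

marker=$(mktemp -u)             # a path that does not exist yet
( sleep 0.3; touch "$marker" ) &  # resource appears asynchronously
wait_for_path "$marker" && echo "ready: $marker"
```

The fixed retry cap is what keeps a hung target from stalling the whole CI run: the caller gets a clean failure after a bounded delay instead of blocking forever.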
00:05:21.482 [2024-11-20 10:23:02.088494] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:21.482 [2024-11-20 10:23:02.176231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.418 10:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:22.419 10:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:22.419 10:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 3043541 00:05:22.419 10:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3043541 00:05:22.419 10:23:02 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.678 lslocks: write error 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 3043541 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3043541 ']' 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3043541 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043541 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- 
common/autotest_common.sh@972 -- # echo 'killing process with pid 3043541' 00:05:22.678 killing process with pid 3043541 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3043541 00:05:22.678 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3043541 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 3043552 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3043552 ']' 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3043552 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3043552 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3043552' 00:05:23.247 killing process with pid 3043552 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3043552 00:05:23.247 10:23:03 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3043552 00:05:23.506 00:05:23.506 real 0m2.658s 00:05:23.506 user 0m2.809s 00:05:23.506 sys 0m0.869s 00:05:23.506 10:23:04 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:23.506 10:23:04 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.506 ************************************ 00:05:23.506 END TEST non_locking_app_on_locked_coremask 00:05:23.506 ************************************ 00:05:23.766 10:23:04 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:23.766 10:23:04 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:23.766 10:23:04 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:23.766 10:23:04 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:23.766 ************************************ 00:05:23.766 START TEST locking_app_on_unlocked_coremask 00:05:23.766 ************************************ 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=3044038 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 3044038 /var/tmp/spdk.sock 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3044038 ']' 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:23.766 10:23:04 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:23.766 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:23.766 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:23.766 [2024-11-20 10:23:04.347025] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:23.766 [2024-11-20 10:23:04.347069] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044038 ] 00:05:23.766 [2024-11-20 10:23:04.422133] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:23.766 [2024-11-20 10:23:04.422160] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:23.766 [2024-11-20 10:23:04.462395] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=3044052 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 3044052 /var/tmp/spdk2.sock 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3044052 ']' 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:24.026 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:24.026 10:23:04 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:24.026 [2024-11-20 10:23:04.731868] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:24.026 [2024-11-20 10:23:04.731915] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044052 ] 00:05:24.285 [2024-11-20 10:23:04.821379] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:24.285 [2024-11-20 10:23:04.901231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:24.852 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:24.852 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:24.852 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 3044052 00:05:24.852 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3044052 00:05:24.852 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:25.421 lslocks: write error 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 3044038 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3044038 ']' 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3044038 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044038 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044038' 00:05:25.421 killing process with pid 3044038 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3044038 00:05:25.421 10:23:05 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3044038 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 3044052 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3044052 ']' 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 3044052 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044052 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044052' 00:05:25.990 killing process with pid 3044052 00:05:25.990 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 3044052 00:05:25.990 10:23:06 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 3044052 00:05:26.249 00:05:26.249 real 0m2.587s 00:05:26.249 user 0m2.753s 00:05:26.249 sys 0m0.823s 00:05:26.249 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:26.249 10:23:06 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.249 ************************************ 00:05:26.249 END TEST locking_app_on_unlocked_coremask 00:05:26.249 ************************************ 00:05:26.249 10:23:06 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:26.249 10:23:06 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:26.250 10:23:06 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:26.250 10:23:06 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:26.250 ************************************ 00:05:26.250 START TEST locking_app_on_locked_coremask 00:05:26.250 ************************************ 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=3044543 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 3044543 /var/tmp/spdk.sock 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3044543 ']' 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:26.250 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.250 10:23:06 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.509 [2024-11-20 10:23:07.005744] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:26.509 [2024-11-20 10:23:07.005786] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044543 ] 00:05:26.509 [2024-11-20 10:23:07.079149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.509 [2024-11-20 10:23:07.117333] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=3044559 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 3044559 /var/tmp/spdk2.sock 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 
00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 3044559 /var/tmp/spdk2.sock 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3044559 /var/tmp/spdk2.sock 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 3044559 ']' 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:27.447 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:27.447 10:23:07 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:27.447 [2024-11-20 10:23:07.881360] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:27.447 [2024-11-20 10:23:07.881410] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044559 ] 00:05:27.447 [2024-11-20 10:23:07.974550] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 3044543 has claimed it. 00:05:27.447 [2024-11-20 10:23:07.974591] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:28.015 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3044559) - No such process 00:05:28.015 ERROR: process (pid: 3044559) is no longer running 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 3044543 00:05:28.015 10:23:08 
event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 3044543 00:05:28.015 10:23:08 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.590 lslocks: write error 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 3044543 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 3044543 ']' 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 3044543 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044543 00:05:28.590 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.591 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.591 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044543' 00:05:28.591 killing process with pid 3044543 00:05:28.591 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 3044543 00:05:28.591 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 3044543 00:05:28.853 00:05:28.853 real 0m2.499s 00:05:28.853 user 0m2.795s 00:05:28.853 sys 0m0.685s 00:05:28.853 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:28.853 10:23:09 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@10 -- # set +x 00:05:28.853 ************************************ 00:05:28.853 END TEST locking_app_on_locked_coremask 00:05:28.853 ************************************ 00:05:28.853 10:23:09 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:28.853 10:23:09 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:28.853 10:23:09 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:28.853 10:23:09 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:28.853 ************************************ 00:05:28.853 START TEST locking_overlapped_coremask 00:05:28.853 ************************************ 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=3044943 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 3044943 /var/tmp/spdk.sock 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3044943 ']' 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:28.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:28.854 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:28.854 [2024-11-20 10:23:09.572365] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:28.854 [2024-11-20 10:23:09.572406] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3044943 ] 00:05:29.112 [2024-11-20 10:23:09.648260] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:29.112 [2024-11-20 10:23:09.692926] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.112 [2024-11-20 10:23:09.693035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.112 [2024-11-20 10:23:09.693035] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=3045045 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 3045045 /var/tmp/spdk2.sock 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg 
waitforlisten 3045045 /var/tmp/spdk2.sock 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 3045045 /var/tmp/spdk2.sock 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 3045045 ']' 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:29.371 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:29.371 10:23:09 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:29.371 [2024-11-20 10:23:09.960872] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:29.371 [2024-11-20 10:23:09.960922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045045 ] 00:05:29.371 [2024-11-20 10:23:10.053182] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3044943 has claimed it. 00:05:29.371 [2024-11-20 10:23:10.053230] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:29.939 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 850: kill: (3045045) - No such process 00:05:29.939 ERROR: process (pid: 3045045) is no longer running 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ 
/var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 3044943 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 3044943 ']' 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 3044943 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3044943 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3044943' 00:05:29.939 killing process with pid 3044943 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 3044943 00:05:29.939 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 3044943 00:05:30.507 00:05:30.507 real 0m1.439s 00:05:30.507 user 0m3.964s 00:05:30.507 sys 0m0.414s 00:05:30.507 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:30.507 10:23:10 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:30.507 
************************************ 00:05:30.507 END TEST locking_overlapped_coremask 00:05:30.507 ************************************ 00:05:30.507 10:23:10 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:30.507 10:23:10 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:30.507 10:23:10 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:30.507 10:23:10 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:30.507 ************************************ 00:05:30.507 START TEST locking_overlapped_coremask_via_rpc 00:05:30.507 ************************************ 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=3045294 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 3045294 /var/tmp/spdk.sock 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3045294 ']' 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:05:30.507 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.507 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.507 [2024-11-20 10:23:11.079844] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:30.507 [2024-11-20 10:23:11.079886] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045294 ] 00:05:30.507 [2024-11-20 10:23:11.154984] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:30.507 [2024-11-20 10:23:11.155008] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:30.507 [2024-11-20 10:23:11.199425] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:30.507 [2024-11-20 10:23:11.199530] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:30.507 [2024-11-20 10:23:11.199531] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=3045308 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 3045308 /var/tmp/spdk2.sock 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r 
/var/tmp/spdk2.sock --disable-cpumask-locks 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3045308 ']' 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:30.767 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:30.767 10:23:11 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:30.767 [2024-11-20 10:23:11.459108] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:30.767 [2024-11-20 10:23:11.459156] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045308 ] 00:05:31.026 [2024-11-20 10:23:11.551739] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:31.026 [2024-11-20 10:23:11.551764] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:31.026 [2024-11-20 10:23:11.639241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:31.026 [2024-11-20 10:23:11.639295] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:31.026 [2024-11-20 10:23:11.639296] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.594 10:23:12 
event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:31.594 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.595 [2024-11-20 10:23:12.304278] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 3045294 has claimed it. 00:05:31.595 request: 00:05:31.595 { 00:05:31.595 "method": "framework_enable_cpumask_locks", 00:05:31.595 "req_id": 1 00:05:31.595 } 00:05:31.595 Got JSON-RPC error response 00:05:31.595 response: 00:05:31.595 { 00:05:31.595 "code": -32603, 00:05:31.595 "message": "Failed to claim CPU core: 2" 00:05:31.595 } 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 3045294 /var/tmp/spdk.sock 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 
-- # '[' -z 3045294 ']' 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.595 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 3045308 /var/tmp/spdk2.sock 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 3045308 ']' 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:31.853 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
00:05:31.853 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.854 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:32.113 00:05:32.113 real 0m1.693s 00:05:32.113 user 0m0.806s 00:05:32.113 sys 0m0.143s 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.113 10:23:12 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:32.113 ************************************ 00:05:32.113 END TEST locking_overlapped_coremask_via_rpc 00:05:32.113 ************************************ 00:05:32.113 10:23:12 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:32.113 10:23:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3045294 ]] 00:05:32.113 10:23:12 event.cpu_locks -- event/cpu_locks.sh@15 -- # 
killprocess 3045294 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3045294 ']' 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3045294 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045294 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3045294' 00:05:32.113 killing process with pid 3045294 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3045294 00:05:32.113 10:23:12 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3045294 00:05:32.681 10:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3045308 ]] 00:05:32.681 10:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3045308 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3045308 ']' 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3045308 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3045308 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 
3045308' 00:05:32.681 killing process with pid 3045308 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 3045308 00:05:32.681 10:23:13 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 3045308 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 3045294 ]] 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 3045294 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3045294 ']' 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3045294 00:05:32.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3045294) - No such process 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3045294 is not found' 00:05:32.941 Process with pid 3045294 is not found 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 3045308 ]] 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 3045308 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 3045308 ']' 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 3045308 00:05:32.941 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3045308) - No such process 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 3045308 is not found' 00:05:32.941 Process with pid 3045308 is not found 00:05:32.941 10:23:13 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:32.941 00:05:32.941 real 0m14.687s 00:05:32.941 user 0m25.135s 00:05:32.941 sys 0m4.969s 00:05:32.941 10:23:13 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.941 
10:23:13 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 END TEST cpu_locks 00:05:32.941 ************************************ 00:05:32.941 00:05:32.941 real 0m39.704s 00:05:32.941 user 1m15.564s 00:05:32.941 sys 0m8.532s 00:05:32.941 10:23:13 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:32.941 10:23:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 END TEST event 00:05:32.941 ************************************ 00:05:32.941 10:23:13 -- spdk/autotest.sh@169 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.941 10:23:13 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:32.941 10:23:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:32.941 10:23:13 -- common/autotest_common.sh@10 -- # set +x 00:05:32.941 ************************************ 00:05:32.941 START TEST thread 00:05:32.941 ************************************ 00:05:32.941 10:23:13 thread -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:32.941 * Looking for test storage... 
00:05:32.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:32.942 10:23:13 thread -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1693 -- # lcov --version 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:33.201 10:23:13 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:33.201 10:23:13 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:33.201 10:23:13 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:33.201 10:23:13 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:33.201 10:23:13 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:33.201 10:23:13 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:33.201 10:23:13 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:33.201 10:23:13 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:33.201 10:23:13 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:33.201 10:23:13 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:33.201 10:23:13 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:33.201 10:23:13 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:33.201 10:23:13 thread -- scripts/common.sh@345 -- # : 1 00:05:33.201 10:23:13 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:33.201 10:23:13 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:33.201 10:23:13 thread -- scripts/common.sh@365 -- # decimal 1 00:05:33.201 10:23:13 thread -- scripts/common.sh@353 -- # local d=1 00:05:33.201 10:23:13 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:33.201 10:23:13 thread -- scripts/common.sh@355 -- # echo 1 00:05:33.201 10:23:13 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:33.201 10:23:13 thread -- scripts/common.sh@366 -- # decimal 2 00:05:33.201 10:23:13 thread -- scripts/common.sh@353 -- # local d=2 00:05:33.201 10:23:13 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:33.201 10:23:13 thread -- scripts/common.sh@355 -- # echo 2 00:05:33.201 10:23:13 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:33.201 10:23:13 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:33.201 10:23:13 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:33.201 10:23:13 thread -- scripts/common.sh@368 -- # return 0 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:33.201 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:33.201 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:33.201 --rc genhtml_branch_coverage=1 00:05:33.201 --rc genhtml_function_coverage=1 00:05:33.201 --rc genhtml_legend=1 00:05:33.201 --rc geninfo_all_blocks=1 00:05:33.201 --rc geninfo_unexecuted_blocks=1 00:05:33.201 00:05:33.201 ' 00:05:33.201 10:23:13 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:33.201 10:23:13 thread -- common/autotest_common.sh@10 -- # set +x 00:05:33.201 ************************************ 00:05:33.201 START TEST thread_poller_perf 00:05:33.201 ************************************ 00:05:33.201 10:23:13 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:33.201 [2024-11-20 10:23:13.797631] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:05:33.201 [2024-11-20 10:23:13.797699] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3045849 ] 00:05:33.201 [2024-11-20 10:23:13.874071] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:33.201 [2024-11-20 10:23:13.913244] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:33.201 Running 1000 pollers for 1 seconds with 1 microseconds period. 00:05:34.578 [2024-11-20T09:23:15.309Z] ====================================== 00:05:34.578 [2024-11-20T09:23:15.309Z] busy:2105792048 (cyc) 00:05:34.578 [2024-11-20T09:23:15.309Z] total_run_count: 424000 00:05:34.578 [2024-11-20T09:23:15.309Z] tsc_hz: 2100000000 (cyc) 00:05:34.578 [2024-11-20T09:23:15.309Z] ====================================== 00:05:34.578 [2024-11-20T09:23:15.309Z] poller_cost: 4966 (cyc), 2364 (nsec) 00:05:34.578 00:05:34.578 real 0m1.179s 00:05:34.578 user 0m1.106s 00:05:34.578 sys 0m0.070s 00:05:34.578 10:23:14 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.578 10:23:14 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 ************************************ 00:05:34.578 END TEST thread_poller_perf 00:05:34.578 ************************************ 00:05:34.578 10:23:14 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.578 10:23:14 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:34.578 10:23:14 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.578 10:23:14 thread -- common/autotest_common.sh@10 -- # set +x 00:05:34.578 ************************************ 00:05:34.578 START TEST thread_poller_perf 00:05:34.578 
************************************ 00:05:34.578 10:23:15 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:34.578 [2024-11-20 10:23:15.044677] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:34.578 [2024-11-20 10:23:15.044733] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046025 ] 00:05:34.578 [2024-11-20 10:23:15.121187] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:34.578 [2024-11-20 10:23:15.161377] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.578 Running 1000 pollers for 1 seconds with 0 microseconds period. 00:05:35.515 [2024-11-20T09:23:16.246Z] ====================================== 00:05:35.515 [2024-11-20T09:23:16.246Z] busy:2101330578 (cyc) 00:05:35.515 [2024-11-20T09:23:16.246Z] total_run_count: 5601000 00:05:35.515 [2024-11-20T09:23:16.246Z] tsc_hz: 2100000000 (cyc) 00:05:35.515 [2024-11-20T09:23:16.246Z] ====================================== 00:05:35.515 [2024-11-20T09:23:16.246Z] poller_cost: 375 (cyc), 178 (nsec) 00:05:35.515 00:05:35.515 real 0m1.175s 00:05:35.515 user 0m1.097s 00:05:35.515 sys 0m0.075s 00:05:35.515 10:23:16 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.515 10:23:16 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:35.515 ************************************ 00:05:35.515 END TEST thread_poller_perf 00:05:35.515 ************************************ 00:05:35.515 10:23:16 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:35.515 00:05:35.515 real 0m2.655s 00:05:35.515 user 0m2.353s 00:05:35.515 sys 0m0.319s 00:05:35.515 10:23:16 thread -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:05:35.515 10:23:16 thread -- common/autotest_common.sh@10 -- # set +x 00:05:35.515 ************************************ 00:05:35.515 END TEST thread 00:05:35.515 ************************************ 00:05:35.774 10:23:16 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:35.774 10:23:16 -- spdk/autotest.sh@176 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.774 10:23:16 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:35.774 10:23:16 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:35.774 10:23:16 -- common/autotest_common.sh@10 -- # set +x 00:05:35.774 ************************************ 00:05:35.774 START TEST app_cmdline 00:05:35.774 ************************************ 00:05:35.774 10:23:16 app_cmdline -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:05:35.774 * Looking for test storage... 00:05:35.774 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:35.774 10:23:16 app_cmdline -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:35.774 10:23:16 app_cmdline -- common/autotest_common.sh@1693 -- # lcov --version 00:05:35.774 10:23:16 app_cmdline -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:35.774 10:23:16 app_cmdline -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:35.774 10:23:16 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:35.774 10:23:16 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:35.774 10:23:16 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:35.774 10:23:16 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 
00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:35.775 10:23:16 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.775 --rc genhtml_branch_coverage=1 
00:05:35.775 --rc genhtml_function_coverage=1 00:05:35.775 --rc genhtml_legend=1 00:05:35.775 --rc geninfo_all_blocks=1 00:05:35.775 --rc geninfo_unexecuted_blocks=1 00:05:35.775 00:05:35.775 ' 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.775 --rc genhtml_branch_coverage=1 00:05:35.775 --rc genhtml_function_coverage=1 00:05:35.775 --rc genhtml_legend=1 00:05:35.775 --rc geninfo_all_blocks=1 00:05:35.775 --rc geninfo_unexecuted_blocks=1 00:05:35.775 00:05:35.775 ' 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.775 --rc genhtml_branch_coverage=1 00:05:35.775 --rc genhtml_function_coverage=1 00:05:35.775 --rc genhtml_legend=1 00:05:35.775 --rc geninfo_all_blocks=1 00:05:35.775 --rc geninfo_unexecuted_blocks=1 00:05:35.775 00:05:35.775 ' 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:35.775 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:35.775 --rc genhtml_branch_coverage=1 00:05:35.775 --rc genhtml_function_coverage=1 00:05:35.775 --rc genhtml_legend=1 00:05:35.775 --rc geninfo_all_blocks=1 00:05:35.775 --rc geninfo_unexecuted_blocks=1 00:05:35.775 00:05:35.775 ' 00:05:35.775 10:23:16 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:35.775 10:23:16 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=3046355 00:05:35.775 10:23:16 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 3046355 00:05:35.775 10:23:16 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 3046355 ']' 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:35.775 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.775 10:23:16 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.034 [2024-11-20 10:23:16.528857] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:36.034 [2024-11-20 10:23:16.528910] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3046355 ] 00:05:36.034 [2024-11-20 10:23:16.604671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:36.034 [2024-11-20 10:23:16.645029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:05:36.293 10:23:16 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.293 10:23:16 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:36.293 10:23:16 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:05:36.551 { 00:05:36.551 "version": "SPDK v25.01-pre git sha1 097badaeb", 00:05:36.551 "fields": { 00:05:36.551 "major": 25, 00:05:36.551 "minor": 1, 00:05:36.551 "patch": 0, 00:05:36.551 "suffix": "-pre", 00:05:36.551 "commit": "097badaeb" 00:05:36.551 } 00:05:36.551 } 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@24 -- 
# expected_methods+=("spdk_get_version") 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:36.551 10:23:17 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@644 -- # case 
"$(type -t "$arg")" in 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:05:36.551 10:23:17 app_cmdline -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:36.811 request: 00:05:36.811 { 00:05:36.811 "method": "env_dpdk_get_mem_stats", 00:05:36.811 "req_id": 1 00:05:36.811 } 00:05:36.811 Got JSON-RPC error response 00:05:36.811 response: 00:05:36.811 { 00:05:36.811 "code": -32601, 00:05:36.811 "message": "Method not found" 00:05:36.811 } 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.811 10:23:17 app_cmdline -- app/cmdline.sh@1 -- # killprocess 3046355 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 3046355 ']' 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 3046355 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3046355 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3046355' 00:05:36.811 killing process with pid 3046355 00:05:36.811 
10:23:17 app_cmdline -- common/autotest_common.sh@973 -- # kill 3046355 00:05:36.811 10:23:17 app_cmdline -- common/autotest_common.sh@978 -- # wait 3046355 00:05:37.070 00:05:37.070 real 0m1.350s 00:05:37.070 user 0m1.571s 00:05:37.070 sys 0m0.462s 00:05:37.070 10:23:17 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.070 10:23:17 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:37.070 ************************************ 00:05:37.070 END TEST app_cmdline 00:05:37.070 ************************************ 00:05:37.070 10:23:17 -- spdk/autotest.sh@177 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:37.070 10:23:17 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.070 10:23:17 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.070 10:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:37.070 ************************************ 00:05:37.070 START TEST version 00:05:37.070 ************************************ 00:05:37.070 10:23:17 version -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:05:37.070 * Looking for test storage... 
00:05:37.070 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:05:37.070 10:23:17 version -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.070 10:23:17 version -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.070 10:23:17 version -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.330 10:23:17 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.330 10:23:17 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.330 10:23:17 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.330 10:23:17 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.330 10:23:17 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.330 10:23:17 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.330 10:23:17 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.330 10:23:17 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.330 10:23:17 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.330 10:23:17 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.330 10:23:17 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.330 10:23:17 version -- scripts/common.sh@344 -- # case "$op" in 00:05:37.330 10:23:17 version -- scripts/common.sh@345 -- # : 1 00:05:37.330 10:23:17 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.330 10:23:17 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.330 10:23:17 version -- scripts/common.sh@365 -- # decimal 1 00:05:37.330 10:23:17 version -- scripts/common.sh@353 -- # local d=1 00:05:37.330 10:23:17 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.330 10:23:17 version -- scripts/common.sh@355 -- # echo 1 00:05:37.330 10:23:17 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.330 10:23:17 version -- scripts/common.sh@366 -- # decimal 2 00:05:37.330 10:23:17 version -- scripts/common.sh@353 -- # local d=2 00:05:37.330 10:23:17 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.330 10:23:17 version -- scripts/common.sh@355 -- # echo 2 00:05:37.330 10:23:17 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.330 10:23:17 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.330 10:23:17 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.330 10:23:17 version -- scripts/common.sh@368 -- # return 0 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.330 --rc genhtml_branch_coverage=1 00:05:37.330 --rc genhtml_function_coverage=1 00:05:37.330 --rc genhtml_legend=1 00:05:37.330 --rc geninfo_all_blocks=1 00:05:37.330 --rc geninfo_unexecuted_blocks=1 00:05:37.330 00:05:37.330 ' 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.330 --rc genhtml_branch_coverage=1 00:05:37.330 --rc genhtml_function_coverage=1 00:05:37.330 --rc genhtml_legend=1 00:05:37.330 --rc geninfo_all_blocks=1 00:05:37.330 --rc geninfo_unexecuted_blocks=1 00:05:37.330 00:05:37.330 ' 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.330 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.330 --rc genhtml_branch_coverage=1 00:05:37.330 --rc genhtml_function_coverage=1 00:05:37.330 --rc genhtml_legend=1 00:05:37.330 --rc geninfo_all_blocks=1 00:05:37.330 --rc geninfo_unexecuted_blocks=1 00:05:37.330 00:05:37.330 ' 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.330 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.330 --rc genhtml_branch_coverage=1 00:05:37.330 --rc genhtml_function_coverage=1 00:05:37.330 --rc genhtml_legend=1 00:05:37.330 --rc geninfo_all_blocks=1 00:05:37.330 --rc geninfo_unexecuted_blocks=1 00:05:37.330 00:05:37.330 ' 00:05:37.330 10:23:17 version -- app/version.sh@17 -- # get_header_version major 00:05:37.330 10:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # cut -f2 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.330 10:23:17 version -- app/version.sh@17 -- # major=25 00:05:37.330 10:23:17 version -- app/version.sh@18 -- # get_header_version minor 00:05:37.330 10:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # cut -f2 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.330 10:23:17 version -- app/version.sh@18 -- # minor=1 00:05:37.330 10:23:17 version -- app/version.sh@19 -- # get_header_version patch 00:05:37.330 10:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # cut -f2 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.330 
10:23:17 version -- app/version.sh@19 -- # patch=0 00:05:37.330 10:23:17 version -- app/version.sh@20 -- # get_header_version suffix 00:05:37.330 10:23:17 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # cut -f2 00:05:37.330 10:23:17 version -- app/version.sh@14 -- # tr -d '"' 00:05:37.330 10:23:17 version -- app/version.sh@20 -- # suffix=-pre 00:05:37.330 10:23:17 version -- app/version.sh@22 -- # version=25.1 00:05:37.330 10:23:17 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:37.330 10:23:17 version -- app/version.sh@28 -- # version=25.1rc0 00:05:37.330 10:23:17 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:05:37.330 10:23:17 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:37.330 10:23:17 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:37.330 10:23:17 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:37.330 00:05:37.330 real 0m0.206s 00:05:37.330 user 0m0.113s 00:05:37.330 sys 0m0.128s 00:05:37.330 10:23:17 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.330 10:23:17 version -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 ************************************ 00:05:37.330 END TEST version 00:05:37.330 ************************************ 00:05:37.330 10:23:17 -- spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:37.330 10:23:17 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:37.330 10:23:17 -- spdk/autotest.sh@194 -- # uname -s 00:05:37.330 10:23:17 -- spdk/autotest.sh@194 -- # [[ Linux 
== Linux ]] 00:05:37.330 10:23:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.330 10:23:17 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:37.330 10:23:17 -- spdk/autotest.sh@207 -- # '[' 0 -eq 1 ']' 00:05:37.330 10:23:17 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:05:37.330 10:23:17 -- spdk/autotest.sh@260 -- # timing_exit lib 00:05:37.330 10:23:17 -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:37.330 10:23:17 -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 10:23:18 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:05:37.330 10:23:18 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:05:37.330 10:23:18 -- spdk/autotest.sh@276 -- # '[' 1 -eq 1 ']' 00:05:37.330 10:23:18 -- spdk/autotest.sh@277 -- # export NET_TYPE 00:05:37.330 10:23:18 -- spdk/autotest.sh@280 -- # '[' tcp = rdma ']' 00:05:37.330 10:23:18 -- spdk/autotest.sh@283 -- # '[' tcp = tcp ']' 00:05:37.330 10:23:18 -- spdk/autotest.sh@284 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:37.330 10:23:18 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.330 10:23:18 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.330 10:23:18 -- common/autotest_common.sh@10 -- # set +x 00:05:37.330 ************************************ 00:05:37.330 START TEST nvmf_tcp 00:05:37.330 ************************************ 00:05:37.330 10:23:18 nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:05:37.590 * Looking for test storage... 
00:05:37.590 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:37.590 10:23:18 nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.590 10:23:18 nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.590 10:23:18 nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.590 10:23:18 nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.590 10:23:18 nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.591 10:23:18 nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.591 --rc genhtml_branch_coverage=1 00:05:37.591 --rc genhtml_function_coverage=1 00:05:37.591 --rc genhtml_legend=1 00:05:37.591 --rc geninfo_all_blocks=1 00:05:37.591 --rc geninfo_unexecuted_blocks=1 00:05:37.591 00:05:37.591 ' 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.591 --rc genhtml_branch_coverage=1 00:05:37.591 --rc genhtml_function_coverage=1 00:05:37.591 --rc genhtml_legend=1 00:05:37.591 --rc geninfo_all_blocks=1 00:05:37.591 --rc geninfo_unexecuted_blocks=1 00:05:37.591 00:05:37.591 ' 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:05:37.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.591 --rc genhtml_branch_coverage=1 00:05:37.591 --rc genhtml_function_coverage=1 00:05:37.591 --rc genhtml_legend=1 00:05:37.591 --rc geninfo_all_blocks=1 00:05:37.591 --rc geninfo_unexecuted_blocks=1 00:05:37.591 00:05:37.591 ' 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.591 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.591 --rc genhtml_branch_coverage=1 00:05:37.591 --rc genhtml_function_coverage=1 00:05:37.591 --rc genhtml_legend=1 00:05:37.591 --rc geninfo_all_blocks=1 00:05:37.591 --rc geninfo_unexecuted_blocks=1 00:05:37.591 00:05:37.591 ' 00:05:37.591 10:23:18 nvmf_tcp -- nvmf/nvmf.sh@10 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.591 10:23:18 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:37.591 ************************************ 00:05:37.591 START TEST nvmf_target_core 00:05:37.591 ************************************ 00:05:37.591 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:05:37.851 * Looking for test storage... 
00:05:37.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lcov --version 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # IFS=.-: 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@336 -- # read -ra ver1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # IFS=.-: 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@337 -- # read -ra ver2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@338 -- # local 'op=<' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@340 -- # ver1_l=2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@341 -- # ver2_l=1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@344 -- # case "$op" in 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@345 -- # : 1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # decimal 1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@365 -- # ver1[v]=1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # decimal 2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@353 -- # local d=2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@355 -- # echo 2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@366 -- # ver2[v]=2 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@368 -- # return 0 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.851 --rc genhtml_branch_coverage=1 00:05:37.851 --rc genhtml_function_coverage=1 00:05:37.851 --rc genhtml_legend=1 00:05:37.851 --rc geninfo_all_blocks=1 00:05:37.851 --rc geninfo_unexecuted_blocks=1 00:05:37.851 00:05:37.851 ' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.851 --rc genhtml_branch_coverage=1 
00:05:37.851 --rc genhtml_function_coverage=1 00:05:37.851 --rc genhtml_legend=1 00:05:37.851 --rc geninfo_all_blocks=1 00:05:37.851 --rc geninfo_unexecuted_blocks=1 00:05:37.851 00:05:37.851 ' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.851 --rc genhtml_branch_coverage=1 00:05:37.851 --rc genhtml_function_coverage=1 00:05:37.851 --rc genhtml_legend=1 00:05:37.851 --rc geninfo_all_blocks=1 00:05:37.851 --rc geninfo_unexecuted_blocks=1 00:05:37.851 00:05:37.851 ' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:37.851 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:37.851 --rc genhtml_branch_coverage=1 00:05:37.851 --rc genhtml_function_coverage=1 00:05:37.851 --rc genhtml_legend=1 00:05:37.851 --rc geninfo_all_blocks=1 00:05:37.851 --rc geninfo_unexecuted_blocks=1 00:05:37.851 00:05:37.851 ' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- 
nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@15 -- # shopt -s extglob 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@50 -- # : 0 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:37.851 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:37.852 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:37.852 
************************************ 00:05:37.852 START TEST nvmf_abort 00:05:37.852 ************************************ 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:05:37.852 * Looking for test storage... 00:05:37.852 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:37.852 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@344 -- # case "$op" in 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:38.112 
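The `lt 1.15 2` trace above walks `scripts/common.sh`'s `cmp_versions`: both versions are split on `.`/`-` into arrays and compared field by field until one side wins. A condensed bash sketch of that logic (numeric fields only; the real script also normalizes non-numeric components via its `decimal` helper):

```shell
# Condensed sketch of the field-by-field version compare traced above:
# split both versions on '.'/'-' and compare numerically, shorter side
# padded with zeros. Returns 0 when $1 < $2.
lt() {
    local IFS=.- v ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal: not less-than
}
lt 1.15 2 && echo "1.15 < 2"       # matches the log: lcov 1.15 is older than 2
lt 2 1.15 || echo "2 is not < 1.15"
```

Here the check decides which `--rc lcov_*` coverage options the installed lcov accepts, which is why the `LCOV_OPTS` export follows immediately in the trace.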
10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:38.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.112 --rc genhtml_branch_coverage=1 00:05:38.112 --rc genhtml_function_coverage=1 00:05:38.112 --rc genhtml_legend=1 00:05:38.112 --rc geninfo_all_blocks=1 00:05:38.112 --rc geninfo_unexecuted_blocks=1 00:05:38.112 00:05:38.112 ' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:38.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.112 --rc genhtml_branch_coverage=1 00:05:38.112 --rc genhtml_function_coverage=1 00:05:38.112 --rc genhtml_legend=1 00:05:38.112 --rc geninfo_all_blocks=1 00:05:38.112 --rc geninfo_unexecuted_blocks=1 00:05:38.112 00:05:38.112 ' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:38.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.112 --rc genhtml_branch_coverage=1 00:05:38.112 --rc genhtml_function_coverage=1 00:05:38.112 --rc genhtml_legend=1 00:05:38.112 --rc geninfo_all_blocks=1 00:05:38.112 --rc geninfo_unexecuted_blocks=1 00:05:38.112 00:05:38.112 ' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:38.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:38.112 --rc genhtml_branch_coverage=1 00:05:38.112 --rc genhtml_function_coverage=1 00:05:38.112 --rc genhtml_legend=1 00:05:38.112 --rc geninfo_all_blocks=1 00:05:38.112 --rc geninfo_unexecuted_blocks=1 00:05:38.112 00:05:38.112 ' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:38.112 10:23:18 
nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:05:38.112 10:23:18 
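The enormous `PATH` values above are the visible effect of `paths/export.sh` unconditionally prepending the Go/golangci/protoc directories each time a nested test script re-sources it, so the same three entries pile up once per nesting level. A hedged sketch of an idempotent prepend guard (function name and the `MYPATH` stand-in variable are hypothetical; a real guard would operate on `PATH` itself):

```shell
# Hypothetical guard: prepend a directory only when it is not already
# present, so re-sourcing an exports file cannot duplicate entries.
# Operates on MYPATH here to avoid clobbering the live PATH.
path_prepend() {
    case ":$MYPATH:" in
        *":$1:"*) ;;                    # already present: no-op
        *) MYPATH="$1:$MYPATH" ;;
    esac
}
MYPATH=/usr/bin
path_prepend /opt/go/1.21.1/bin
path_prepend /opt/go/1.21.1/bin         # second call changes nothing
echo "$MYPATH"                          # /opt/go/1.21.1/bin:/usr/bin
```

The duplication is harmless for lookup correctness (the first match wins), which is presumably why the export script does not bother deduplicating.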
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:38.112 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:38.113 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:05:38.113 10:23:18 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:05:44.685 10:23:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # mlx=() 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:44.685 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:44.685 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
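The discovery loop above buckets candidate NICs by PCI vendor:device id (Intel `0x8086`, Mellanox `0x15b3`) into the `e810`/`x722`/`mlx` arrays populated at `common.sh@141`-`160`, then matches each found device against them; both ports here are `0x8086:0x159b`, i.e. Intel E810 running the `ice` driver. A simplified sketch of that classification (id list abridged to the ids visible in this trace):

```shell
# Simplified sketch of the NIC bucketing traced above: map a PCI
# vendor:device pair to the family name the test harness uses.
# Only the ids appearing in this log are listed; the real tables
# in nvmf/common.sh cover more devices.
classify_nic() {
    local vendor=$1 device=$2
    case "$vendor:$device" in
        0x8086:0x1592|0x8086:0x159b) echo e810 ;;
        0x8086:0x37d2)               echo x722 ;;
        0x15b3:*)                    echo mlx  ;;
        *)                           echo unknown ;;
    esac
}
classify_nic 0x8086 0x159b   # e810 -- the family of both ports found in the log
```

The family matters later because e810 vs mlx5 devices take different RDMA/TCP setup branches (`[[ e810 == mlx5 ]]` and friends in the trace).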
nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.685 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:44.685 Found net devices under 0000:86:00.0: cvl_0_0 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:44.686 Found net devices under 0000:86:00.1: cvl_0_1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@45 -- # 
local initiator=initiator0 target=target0 _ns= 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:05:44.686 
10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:05:44.686 10.0.0.1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 
dev cvl_0_1' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:05:44.686 10.0.0.2 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:05:44.686 
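The `val_to_ip` calls traced above turn the integer pool values `167772161`/`167772162` into the dotted-quad addresses `10.0.0.1`/`10.0.0.2` assigned to `cvl_0_0` and the namespaced `cvl_0_1`. A self-contained sketch of that helper, unpacking one octet per byte of the 32-bit value:

```shell
# Sketch of the val_to_ip helper traced above: unpack a 32-bit
# integer into dotted-quad notation, high byte first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) \
        $((  val        & 0xff ))
}
val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1 (initiator side)
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2 (target namespace side)
```

Keeping the pool as an integer lets `setup_interfaces` hand out consecutive address pairs with plain arithmetic (`ip_pool += 2` per interface pair, as the `(( _dev++, ip_pool += 2 ))` step below shows).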
10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:05:44.686 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:44.687 10:23:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:05:44.687 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:44.687 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.486 ms 00:05:44.687 00:05:44.687 --- 10.0.0.1 ping statistics --- 00:05:44.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.687 rtt min/avg/max/mdev = 0.486/0.486/0.486/0.000 ms 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:44.687 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:05:44.687 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.132 ms 00:05:44.687 00:05:44.687 --- 10.0.0.2 ping statistics --- 00:05:44.687 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:44.687 rtt min/avg/max/mdev = 0.132/0.132/0.132/0.000 ms 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:05:44.687 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:44.688 10:23:24 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=3050042 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@329 -- # waitforlisten 3050042 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3050042 ']' 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:44.688 10:23:24 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 [2024-11-20 10:23:24.912654] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:05:44.688 [2024-11-20 10:23:24.912699] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:44.688 [2024-11-20 10:23:24.990750] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:44.688 [2024-11-20 10:23:25.033857] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:44.688 [2024-11-20 10:23:25.033893] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:05:44.688 [2024-11-20 10:23:25.033900] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:44.688 [2024-11-20 10:23:25.033907] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:44.688 [2024-11-20 10:23:25.033912] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:05:44.688 [2024-11-20 10:23:25.035276] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:44.688 [2024-11-20 10:23:25.035381] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:44.688 [2024-11-20 10:23:25.035382] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 [2024-11-20 10:23:25.171328] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 
]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 Malloc0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 Delay0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 
nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 [2024-11-20 10:23:25.257480] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:44.688 10:23:25 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:05:44.688 [2024-11-20 10:23:25.393882] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:05:47.224 Initializing NVMe Controllers 00:05:47.224 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:05:47.224 controller IO queue size 128 less than required 00:05:47.224 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:05:47.224 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:05:47.224 Initialization complete. Launching workers. 
00:05:47.224 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 37393 00:05:47.224 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 37454, failed to submit 62 00:05:47.224 success 37397, unsuccessful 57, failed 0 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:05:47.224 rmmod nvme_tcp 00:05:47.224 rmmod nvme_fabrics 00:05:47.224 rmmod nvme_keyring 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:05:47.224 10:23:27 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 3050042 ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3050042 ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3050042' 00:05:47.224 killing process with pid 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3050042 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:47.224 10:23:27 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/setup.sh@222 -- # [[ -n '' ]] 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:05:49.126 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:05:49.385 00:05:49.385 real 0m11.370s 00:05:49.385 user 0m11.665s 00:05:49.385 sys 0m5.555s 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:05:49.385 ************************************ 00:05:49.385 END TEST nvmf_abort 00:05:49.385 ************************************ 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:05:49.385 ************************************ 00:05:49.385 START TEST 
nvmf_ns_hotplug_stress 00:05:49.385 ************************************ 00:05:49.385 10:23:29 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:05:49.385 * Looking for test storage... 00:05:49.385 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:05:49.385 10:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:05:49.385 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:49.386 10:23:30 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:05:49.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.386 --rc genhtml_branch_coverage=1 00:05:49.386 --rc genhtml_function_coverage=1 00:05:49.386 --rc genhtml_legend=1 00:05:49.386 --rc geninfo_all_blocks=1 00:05:49.386 --rc geninfo_unexecuted_blocks=1 00:05:49.386 00:05:49.386 ' 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:05:49.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.386 --rc genhtml_branch_coverage=1 00:05:49.386 --rc genhtml_function_coverage=1 00:05:49.386 --rc genhtml_legend=1 00:05:49.386 --rc geninfo_all_blocks=1 00:05:49.386 --rc geninfo_unexecuted_blocks=1 00:05:49.386 00:05:49.386 ' 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:05:49.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.386 --rc genhtml_branch_coverage=1 00:05:49.386 --rc genhtml_function_coverage=1 00:05:49.386 --rc genhtml_legend=1 00:05:49.386 --rc geninfo_all_blocks=1 00:05:49.386 --rc geninfo_unexecuted_blocks=1 00:05:49.386 00:05:49.386 ' 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:05:49.386 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:49.386 --rc genhtml_branch_coverage=1 00:05:49.386 --rc genhtml_function_coverage=1 00:05:49.386 
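Editor's note: the `lt 1.15 2` / `cmp_versions` trace above splits each version string on `.`, `-` and `:` (the `IFS=.-:` / `read -ra` lines) and compares the components numerically; here `1.15 < 2` holds, so the script returns 0 and the lcov coverage options are enabled. A minimal standalone sketch of that comparison (simplified from the real `scripts/common.sh`, so details are assumed):

```shell
# Simplified sketch of the cmp_versions logic traced above:
# split both versions on '.', '-' and ':' and compare component-wise.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  local i len
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( i = 0; i < len; i++ )); do
    # missing components count as 0 (e.g. "2" vs "1.15" compares 2 vs 1)
    (( ${ver1[i]:-0} < ${ver2[i]:-0} )) && return 0
    (( ${ver1[i]:-0} > ${ver2[i]:-0} )) && return 1
  done
  return 1  # equal versions are not less-than
}
lt 1.15 2 && echo "1.15 < 2"
```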
--rc genhtml_legend=1 00:05:49.386 --rc geninfo_all_blocks=1 00:05:49.386 --rc geninfo_unexecuted_blocks=1 00:05:49.386 00:05:49.386 ' 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:49.386 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:05:49.645 
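Editor's note: the heavily repeated `/opt/golangci/.../opt/protoc/.../opt/go/...` prefixes above come from `paths/export.sh` prepending its directories on every source, so PATH accumulates duplicates across runs. A hypothetical helper (not part of the trace) that would collapse such duplicates while preserving first-seen order:

```shell
# Hypothetical PATH de-duplicator (illustration only, not in the SPDK scripts):
# keep the first occurrence of each ':'-separated entry, drop later repeats.
dedup_path() {
  local IFS=: out= p
  for p in $1; do
    case ":$out:" in
      *":$p:"*) ;;                       # already seen, skip
      *) out=${out:+$out:}$p ;;          # first occurrence, keep
    esac
  done
  printf '%s\n' "$out"
}
dedup_path "/a/bin:/b/bin:/a/bin"   # -> /a/bin:/b/bin
```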
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:05:49.645 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:05:49.646 10:23:30 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # net_devs=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:05:56.218 10:23:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:05:56.218 Found 0000:86:00.0 (0x8086 - 0x159b) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:05:56.218 Found 0000:86:00.1 (0x8086 - 0x159b) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 
]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:05:56.218 Found net devices under 0000:86:00.0: cvl_0_0 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:05:56.218 10:23:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:05:56.218 Found net devices under 0000:86:00.1: cvl_0_1 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:05:56.218 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # 
create_target_ns 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:05:56.219 10:23:35 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:05:56.219 10:23:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:05:56.219 
10.0.0.1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:05:56.219 10.0.0.2 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:05:56.219 10:23:36 
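Editor's note: the `val_to_ip` steps above turn the setup.sh address-pool integer (0x0a000001 = 167772161) into dotted-quad form via `printf '%u.%u.%u.%u\n'`, and the pool advances by 2 per initiator/target pair (10.0.0.1, 10.0.0.2, ...). A sketch of that conversion, assuming plain per-byte unpacking (the trace only shows the final printf of `10 0 0 1`):

```shell
# Sketch of the val_to_ip conversion seen in the trace:
# unpack a 32-bit integer into dotted-quad notation, one octet per byte.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # -> 10.0.0.1
val_to_ip 167772162   # -> 10.0.0.2
```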
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp 
--dport 4420 -j ACCEPT' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:56.219 10:23:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:05:56.219 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:05:56.220 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:05:56.220 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.315 ms 00:05:56.220 00:05:56.220 --- 10.0.0.1 ping statistics --- 00:05:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.220 rtt min/avg/max/mdev = 0.315/0.315/0.315/0.000 ms 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:05:56.220 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:05:56.220 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:05:56.220 00:05:56.220 --- 10.0.0.2 ping statistics --- 00:05:56.220 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:05:56.220 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:05:56.220 10:23:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:05:56.220 10:23:36 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:05:56.220 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
common/autotest_common.sh@10 -- # set +x 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=3054184 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 3054184 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3054184 ']' 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.221 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 [2024-11-20 10:23:36.339627] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
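The trace above launches nvmf_tgt inside the nvmf_ns_spdk namespace and then calls waitforlisten against /var/tmp/spdk.sock. A minimal sketch of that wait pattern is below; the function and variable names (waitforlisten, rpc_addr, max_retries) follow the log, but the body is a simplified stand-in for autotest_common.sh — the real helper probes the socket with an RPC round trip, while this sketch only checks that the target process is alive and the socket path exists.

```shell
# Simplified sketch of waitforlisten (names from the log; body is illustrative).
# Poll until the target pid is alive AND its RPC socket path appears,
# bounded by max_retries. The real script issues an RPC instead of -e.
waitforlisten() {
    local pid=$1
    local rpc_addr=${2:-/var/tmp/spdk.sock}
    local max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for ((i = 0; i < max_retries; i++)); do
        kill -0 "$pid" 2>/dev/null || return 1  # target died before listening
        [[ -e $rpc_addr ]] && return 0          # socket path present: ready
        sleep 0.1
    done
    return 1  # retries exhausted
}
```

This is why the log prints the "Waiting for process..." line before any RPC-driven steps (nvmf_create_transport and later) run: nothing proceeds until the socket check succeeds or the retry budget is spent.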
00:05:56.221 [2024-11-20 10:23:36.339675] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:56.221 [2024-11-20 10:23:36.419057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.221 [2024-11-20 10:23:36.460442] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:05:56.221 [2024-11-20 10:23:36.460478] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:05:56.221 [2024-11-20 10:23:36.460485] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:05:56.221 [2024-11-20 10:23:36.460491] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:05:56.221 [2024-11-20 10:23:36.460495] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
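The target was started with -m 0xE, and the reactor notices that follow report cores 1, 2 and 3 — exactly the bit positions set in that mask ("Total cores available: 3"). A short sketch of decoding an SPDK-style core mask with plain bit arithmetic (this is not SPDK code, just the arithmetic behind the mask):

```shell
# Decode a core mask into core numbers: 0xE = 0b1110, so bits 1-3 are set,
# matching the three reactors the log reports on cores 1, 2 and 3.
mask=0xE
cores=()
for ((i = 0; i < 16; i++)); do
    if (( (mask >> i) & 1 )); then
        cores+=("$i")
    fi
done
echo "reactor cores: ${cores[*]}"   # → reactor cores: 1 2 3
```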
00:05:56.221 [2024-11-20 10:23:36.461931] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:05:56.221 [2024-11-20 10:23:36.462037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.221 [2024-11-20 10:23:36.462037] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:05:56.221 [2024-11-20 10:23:36.758446] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:56.221 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:05:56.479 10:23:36 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:05:56.479 [2024-11-20 10:23:37.143840] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:05:56.479 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:05:56.738 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:05:56.998 Malloc0 00:05:56.998 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:05:57.256 Delay0 00:05:57.257 10:23:37 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:57.515 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:05:57.515 NULL1 00:05:57.515 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:05:57.774 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3054451 00:05:57.774 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:05:57.775 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:05:57.775 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.033 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.292 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:05:58.292 10:23:38 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:05:58.292 true 00:05:58.568 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:05:58.568 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:58.568 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:58.854 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:05:58.854 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:05:59.124 true 00:05:59.124 10:23:39 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:05:59.124 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.383 10:23:39 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:05:59.383 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:05:59.383 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:05:59.642 true 00:05:59.642 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:05:59.642 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:05:59.901 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.161 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:00.161 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:00.420 true 00:06:00.420 10:23:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:00.420 10:23:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:00.680 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:00.680 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:00.680 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:00.938 true 00:06:00.938 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:00.938 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.196 10:23:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.455 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:01.455 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:01.715 true 00:06:01.715 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:01.715 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:01.974 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:01.974 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:01.974 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:06:02.232 true 00:06:02.232 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:02.232 10:23:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:02.491 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:02.749 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:06:02.749 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:06:03.008 true 00:06:03.008 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:03.008 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.267 
10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:03.267 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:06:03.267 10:23:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:06:03.525 true 00:06:03.525 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:03.525 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:03.783 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.042 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:06:04.042 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:06:04.301 true 00:06:04.301 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:04.301 10:23:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:04.560 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:04.560 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:06:04.560 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:06:04.820 true 00:06:04.820 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:04.820 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.079 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.337 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:06:05.337 10:23:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:06:05.595 true 00:06:05.595 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:05.595 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:05.595 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:05.854 
10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:06:05.854 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:06:06.112 true 00:06:06.112 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:06.112 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.371 10:23:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:06.630 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:06:06.630 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:06:06.889 true 00:06:06.889 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:06.889 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:06.889 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.148 10:23:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:06:07.148 10:23:47 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:06:07.406 true 00:06:07.406 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:07.406 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:07.665 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:07.924 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:06:07.924 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:06:07.924 true 00:06:08.183 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:08.183 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.183 10:23:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:08.442 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:06:08.442 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:06:08.700 true 00:06:08.700 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:08.700 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:08.958 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.218 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:06:09.218 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:06:09.218 true 00:06:09.218 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:09.218 10:23:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:09.476 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:09.735 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:06:09.735 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:06:09.994 true 00:06:09.994 10:23:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:09.994 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.253 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:10.253 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:06:10.253 10:23:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:06:10.512 true 00:06:10.512 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:10.512 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:10.770 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.028 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:06:11.028 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:06:11.287 true 00:06:11.287 10:23:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:11.287 10:23:51 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:11.287 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:11.546 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:06:11.546 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:06:11.805 true 00:06:11.805 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:11.805 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.062 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.320 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:06:12.320 10:23:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:06:12.578 true 00:06:12.578 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:12.578 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:12.578 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:12.836 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:06:12.836 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:06:13.095 true 00:06:13.095 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:13.095 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.353 10:23:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:13.632 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:06:13.632 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:06:13.632 true 00:06:13.897 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:13.897 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:13.897 
10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.156 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:06:14.156 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:06:14.415 true 00:06:14.415 10:23:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:14.415 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:14.674 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:14.932 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:06:14.932 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:06:14.932 true 00:06:14.932 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:15.191 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.191 10:23:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:15.450 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:06:15.450 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:06:15.709 true 00:06:15.709 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:15.709 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:15.967 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.226 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:06:16.226 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:06:16.226 true 00:06:16.226 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:16.485 10:23:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:16.485 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:16.744 
10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:06:16.744 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:06:17.002 true 00:06:17.002 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:17.002 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.261 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:17.521 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:06:17.521 10:23:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:06:17.521 true 00:06:17.521 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:17.521 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:17.780 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.039 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:06:18.039 10:23:58 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:06:18.298 true 00:06:18.298 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:18.298 10:23:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:18.556 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:18.556 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:06:18.556 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:06:18.814 true 00:06:18.814 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:18.814 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.073 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:19.331 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:06:19.331 10:23:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:06:19.591 true 00:06:19.591 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:19.591 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:19.850 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.109 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:06:20.109 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:06:20.109 true 00:06:20.109 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:20.109 10:24:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:20.368 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:20.626 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:06:20.626 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:06:20.884 true 00:06:20.884 10:24:01 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:20.884 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.144 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.403 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:06:21.403 10:24:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:06:21.403 true 00:06:21.403 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:21.403 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:21.688 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:21.973 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:06:21.973 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:06:22.232 true 00:06:22.232 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:22.232 10:24:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:22.491 10:24:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:22.491 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:06:22.491 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:06:22.750 true 00:06:22.750 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:22.750 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.010 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.269 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:06:23.269 10:24:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:06:23.527 true 00:06:23.528 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:23.528 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:23.786 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:23.787 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:06:23.787 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:06:24.046 true 00:06:24.046 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:24.046 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:24.305 10:24:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:24.563 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:06:24.563 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:06:24.822 true 00:06:24.822 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:24.822 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.081 
10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.081 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:06:25.081 10:24:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:06:25.340 true 00:06:25.340 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:25.340 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:25.599 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:25.857 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:06:25.857 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:06:26.116 true 00:06:26.116 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:26.116 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.375 10:24:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:26.634 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:06:26.634 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:06:26.634 true 00:06:26.634 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:26.634 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:26.893 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.152 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:06:27.152 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:06:27.410 true 00:06:27.410 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451 00:06:27.411 10:24:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:27.669 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:27.928 
10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047
00:06:27.928 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047
00:06:27.928 true
00:06:27.928 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451
00:06:27.928 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:28.187 Initializing NVMe Controllers
00:06:28.187 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:06:28.187 Controller IO queue size 128, less than required.
00:06:28.187 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:06:28.187 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0
00:06:28.187 Initialization complete. Launching workers.
00:06:28.187 ========================================================
00:06:28.187                                                                  Latency(us)
00:06:28.187 Device Information                                             :       IOPS      MiB/s    Average        min        max
00:06:28.187 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0:   27091.59      13.23    4724.71    2169.71   43838.33
00:06:28.187 ========================================================
00:06:28.187 Total                                                          :   27091.59      13.23    4724.71    2169.71   43838.33
00:06:28.187
00:06:28.187 10:24:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
00:06:28.445 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048
00:06:28.445 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048
00:06:28.704 true
00:06:28.704 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3054451
00:06:28.704 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3054451) - No such process
00:06:28.704 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3054451
00:06:28.704 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:06:28.704 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:06:28.962 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8
00:06:28.962 10:24:09
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:06:28.962 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:06:28.962 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:28.962 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:06:29.221 null0 00:06:29.221 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.221 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.221 10:24:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:06:29.480 null1 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:06:29.480 null2 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.480 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:06:29.739 null3 
00:06:29.739 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.739 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.739 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:06:29.998 null4 00:06:29.998 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:29.998 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:29.998 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:06:30.257 null5 00:06:30.257 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.257 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.257 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:06:30.257 null6 00:06:30.516 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.516 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.516 10:24:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:06:30.516 null7 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3060637 3060638 3060640 3060643 3060644 3060646 3060648 3060650 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:30.517 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:30.776 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:31.036 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.296 10:24:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 
nqn.2016-06.io.spdk:cnode1 null1 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.555 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.556 10:24:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:31.556 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # 
(( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:31.814 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:31.815 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.072 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.073 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 
nqn.2016-06.io.spdk:cnode1 null4 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.332 10:24:12 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.332 10:24:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.332 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 
10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:32.591 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:32.850 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:33.110 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:33.369 10:24:13 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:33.369 10:24:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:33.369 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.369 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.369 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
-n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:33.628 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.888 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:33.889 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.889 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.889 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:33.889 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:33.889 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:33.889 10:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.148 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:06:34.408 10:24:14 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.408 10:24:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:06:34.408 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:06:34.408 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:06:34.408 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:06:34.667 10:24:15 
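The trace above shows the core of the hotplug stress test: a loop at ns_hotplug_stress.sh@16-18 that repeatedly attaches namespaces 1-8 (backed by bdevs null0-null7) over rpc.py and then detaches them, in shuffled order each pass. A hypothetical dry-run reconstruction of that loop, inferred only from the traced commands (the real script may differ; `echo` stands in for the real rpc.py so nothing is executed against a live target):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the loop traced at ns_hotplug_stress.sh@16-18.
# Assumption: RPC would point at scripts/rpc.py when driving a real target;
# here it echoes so the sketch runs anywhere.
RPC="echo rpc.py"
NQN=nqn.2016-06.io.spdk:cnode1

hotplug_stress() {
    local i n
    for (( i = 0; i < 10; ++i )); do
        for n in $(shuf -e {1..8}); do   # attach NSIDs 1-8 in random order
            $RPC nvmf_subsystem_add_ns -n "$n" "$NQN" "null$((n - 1))"
        done
        for n in $(shuf -e {1..8}); do   # detach them in random order
            $RPC nvmf_subsystem_remove_ns "$NQN" "$n"
        done
    done
}

ops=$(hotplug_stress | wc -l)   # 10 iterations x (8 adds + 8 removes) = 160 RPCs
```

The shuffled order matches what the log shows (NSIDs 8, 2, 1, 6, ... added; 5, 4, 3, 7, ... removed), which is the point of the stress: the target must tolerate attach/detach in any interleaving.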
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:34.667 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:34.667 rmmod nvme_tcp 00:06:34.926 rmmod nvme_fabrics 00:06:34.926 rmmod nvme_keyring 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 3054184 ']' 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 3054184 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3054184 ']' 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # kill -0 3054184 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3054184 00:06:34.926 10:24:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3054184' 00:06:34.926 killing process with pid 3054184 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3054184 00:06:34.926 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3054184 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:35.185 10:24:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:37.090 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:06:37.090 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:37.090 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for 
dev in "${dev_map[@]}" 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:06:37.091 10:24:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:06:37.091 00:06:37.091 real 0m47.807s 00:06:37.091 user 3m22.691s 00:06:37.091 sys 0m17.281s 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:37.091 ************************************ 00:06:37.091 END TEST nvmf_ns_hotplug_stress 00:06:37.091 ************************************ 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:37.091 10:24:17 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:37.350 ************************************ 00:06:37.350 START TEST nvmf_delete_subsystem 00:06:37.350 ************************************ 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:06:37.350 
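The `iptr` step in the teardown above (nvmf/common.sh@542) re-applies the firewall minus SPDK's own rules: `iptables-save | grep -v SPDK_NVMF | iptables-restore`. A sketch of that filter on a fabricated two-rule dump (an assumption for illustration; no root or iptables needed here, whereas the real helper pipes the filtered dump back into iptables-restore):

```shell
#!/usr/bin/env bash
# Fabricated iptables-save dump: one SPDK_NVMF-tagged rule, one unrelated rule.
dump='-A INPUT -p tcp --dport 4420 -m comment --comment SPDK_NVMF -j ACCEPT
-A INPUT -p tcp --dport 22 -j ACCEPT'

# The iptr filter: drop every line carrying the SPDK_NVMF tag.
kept=$(printf '%s\n' "$dump" | grep -v SPDK_NVMF)
```

Tagging rules with a fixed comment and filtering on it lets the teardown remove exactly what the test added without tracking individual rule handles.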
* Looking for test storage... 00:06:37.350 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 
00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1' 00:06:37.350 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:37.350 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.350 --rc genhtml_branch_coverage=1 00:06:37.350 --rc genhtml_function_coverage=1 00:06:37.350 --rc genhtml_legend=1 00:06:37.350 --rc geninfo_all_blocks=1 00:06:37.350 --rc geninfo_unexecuted_blocks=1 00:06:37.350 00:06:37.350 ' 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.351 --rc genhtml_branch_coverage=1 00:06:37.351 --rc genhtml_function_coverage=1 00:06:37.351 --rc genhtml_legend=1 00:06:37.351 --rc geninfo_all_blocks=1 00:06:37.351 --rc geninfo_unexecuted_blocks=1 00:06:37.351 00:06:37.351 ' 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.351 --rc genhtml_branch_coverage=1 00:06:37.351 --rc genhtml_function_coverage=1 00:06:37.351 --rc genhtml_legend=1 00:06:37.351 --rc geninfo_all_blocks=1 00:06:37.351 --rc geninfo_unexecuted_blocks=1 00:06:37.351 00:06:37.351 ' 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:37.351 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:37.351 --rc genhtml_branch_coverage=1 00:06:37.351 --rc genhtml_function_coverage=1 00:06:37.351 --rc genhtml_legend=1 00:06:37.351 --rc geninfo_all_blocks=1 00:06:37.351 --rc geninfo_unexecuted_blocks=1 00:06:37.351 00:06:37.351 ' 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:37.351 10:24:17 
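The `lt 1.15 2` / `cmp_versions` walk traced above (scripts/common.sh@333-368) splits both version strings on `.`, `-`, and `:` and compares component by component, padding the shorter one with zeros. A minimal re-sketch of that comparison, reconstructed from the traced steps rather than copied from the script:

```shell
#!/usr/bin/env bash
# lt A B: succeed (return 0) when dotted version A sorts before B,
# mirroring the cmp_versions trace: split on .-:, compare per component.
lt() {
    local IFS=.-: v
    local -a a b
    read -ra a <<< "$1"
    read -ra b <<< "$2"
    for (( v = 0; v < ${#a[@]} || v < ${#b[@]}; ++v )); do
        if (( ${a[v]:-0} < ${b[v]:-0} )); then return 0; fi
        if (( ${a[v]:-0} > ${b[v]:-0} )); then return 1; fi
    done
    return 1   # equal versions are not "less than"
}
```

This is why the log compares `1` vs `2` first and stops there: lcov 1.15 already loses on the major component, so the test picks the pre-2.0 `--rc lcov_branch_coverage=1` option spelling.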
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:37.351 10:24:17 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.351 10:24:18 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- 
# NVMF_TARGET_NS_CMD=() 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:37.351 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- 
# remove_target_ns 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:06:37.351 10:24:18 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:06:43.920 10:24:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # e810=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:43.920 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:43.920 10:24:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:43.920 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:43.920 10:24:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:43.920 Found net devices under 0000:86:00.0: cvl_0_0 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:43.920 Found net devices under 0000:86:00.1: cvl_0_1 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # 
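The trace above maps each E810 PCI function (0000:86:00.0 / 0000:86:00.1) to its kernel netdev name (cvl_0_0 / cvl_0_1) by globbing sysfs. A minimal sketch of that lookup is below; `pci_to_netdevs` and the `SYSFS_ROOT` override are hypothetical names added for illustration (the real common.sh globs `/sys` directly into `pci_net_devs`):

```shell
# Sketch of the pci_net_devs glob in nvmf/common.sh@227/@243.
# SYSFS_ROOT is a hypothetical override so the helper can be exercised
# against a fake tree; the real script reads /sys directly.
pci_to_netdevs() {
    local pci=$1
    # each entry under .../net/ is a netdev bound to this PCI function
    local devs=("${SYSFS_ROOT:-/sys}/bus/pci/devices/$pci/net/"*)
    # strip the leading path, mirroring "${pci_net_devs[@]##*/}"
    printf '%s\n' "${devs[@]##*/}"
}
```

This is why the log prints "Found net devices under 0000:86:00.0: cvl_0_0" for each PCI address in `pci_devs`.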
is_hw=yes 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
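The `create_target_ns` steps traced above (setup.sh@142-@148) reduce to two privileged `ip` commands: create the namespace and bring up its loopback, which starts DOWN in a fresh namespace. A minimal sketch, assuming the `nvmf_ns_spdk` name used by this run:

```shell
# Sketch of create_target_ns from nvmf/setup.sh (requires root to actually run).
create_target_ns() {
    local ns=${1:-nvmf_ns_spdk}
    ip netns add "$ns"
    # lo is DOWN in a new namespace; later ping/RPC traffic needs it up
    ip netns exec "$ns" ip link set lo up
}
```

All subsequent target-side commands in the log are then prefixed with `ip netns exec nvmf_ns_spdk` via the `NVMF_TARGET_NS_CMD` array.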
nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:06:43.920 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:06:43.921 10:24:23 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 
00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:06:43.921 10.0.0.1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | 
ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:06:43.921 10.0.0.2 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:06:43.921 10:24:23 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:06:43.921 
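The IP pool in the trace is carried as a plain integer (167772161 = 0x0A000001) and converted to dotted-quad form by `val_to_ip` before `set_ip` assigns it. The traced printf only shows the final octets, but the conversion is ordinary byte extraction; a sketch consistent with the values in the log:

```shell
# Sketch of val_to_ip from nvmf/setup.sh: split a 32-bit value into octets.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $((val >> 24)) $(((val >> 16) & 255)) \
        $(((val >> 8) & 255)) $((val & 255))
}
```

With the pool base 0x0a000001 this yields 10.0.0.1 for the initiator (cvl_0_0) and, after `$((++ip))`, 10.0.0.2 for the target (cvl_0_1) inside the namespace.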
10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem 
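The `ipts` call above (common.sh@541) opens TCP port 4420 on the initiator-side interface and tags the rule with a comment so teardown can find and delete exactly the rules this test added. A sketch of that wrapper, matching the expansion visible in the trace:

```shell
# Sketch of the ipts wrapper from nvmf/common.sh: forward the rule to
# iptables and append a SPDK_NVMF-tagged comment reproducing the rule text,
# so cleanup can later match and remove it.
ipts() {
    iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}
```

Invoked as `ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT`, it produces the tagged iptables command shown in the log.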
-- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 
00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:06:43.921 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:43.921 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.403 ms 00:06:43.921 00:06:43.921 --- 10.0.0.1 ping statistics --- 00:06:43.921 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.921 rtt min/avg/max/mdev = 0.403/0.403/0.403/0.000 ms 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:43.921 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:43.922 10:24:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:06:43.922 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:43.922 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:06:43.922 00:06:43.922 --- 10.0.0.2 ping statistics --- 00:06:43.922 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:43.922 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:06:43.922 10:24:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 
00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:06:43.922 10:24:24 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.922 10:24:24 
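Throughout the trace, addresses are recovered by reading them back from the interface's `ifalias` sysfs attribute, where `set_ip` stored them via `tee` (setup.sh@210/@172). A minimal sketch of the read side; the `SYSFS_ROOT` override is a hypothetical addition for illustration, and the real helper also handles the netns-prefixed case (`ip netns exec ... cat ...`):

```shell
# Sketch of the ifalias read-back used by get_ip_address in nvmf/setup.sh.
# set_ip writes the dotted-quad address into ifalias; this reads it back.
get_ip_address() {
    local dev=$1
    cat "${SYSFS_ROOT:-/sys}/class/net/$dev/ifalias"
}
```

That round-trip is how the legacy env step resolves `NVMF_FIRST_INITIATOR_IP=10.0.0.1` and `NVMF_FIRST_TARGET_IP=10.0.0.2`, while the absent initiator1/target1 devices leave the SECOND variables empty.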
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=3065239 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 3065239 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@835 -- # '[' -z 3065239 ']' 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:43.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:43.922 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:43.923 10:24:24 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:43.923 [2024-11-20 10:24:24.239959] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:06:43.923 [2024-11-20 10:24:24.240003] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:43.923 [2024-11-20 10:24:24.320076] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:43.923 [2024-11-20 10:24:24.358714] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:06:43.923 [2024-11-20 10:24:24.358750] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:43.923 [2024-11-20 10:24:24.358757] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:43.923 [2024-11-20 10:24:24.358763] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:43.923 [2024-11-20 10:24:24.358768] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:43.923 [2024-11-20 10:24:24.360014] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:06:43.923 [2024-11-20 10:24:24.360029] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 [2024-11-20 10:24:25.108906] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 [2024-11-20 10:24:25.129105] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 NULL1 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 Delay0 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3065309 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:06:44.491 10:24:25 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:44.751 [2024-11-20 10:24:25.240042] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:06:46.657 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:06:46.657 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:46.657 10:24:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error 
(sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 Read completed with error (sct=0, sc=8) 00:06:46.657 starting I/O failed: -6 00:06:46.657 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 [2024-11-20 10:24:27.355134] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21764a0 is same with the state(6) to be set 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with 
error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 
00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 
00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with 
error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write 
completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 Write completed with error (sct=0, sc=8) 00:06:46.658 starting I/O failed: -6 00:06:46.658 Read completed with error (sct=0, sc=8) 00:06:46.659 Write completed with error (sct=0, sc=8) 00:06:46.659 starting I/O failed: -6 00:06:46.659 Read completed with error (sct=0, sc=8) 00:06:46.659 Read completed with error (sct=0, sc=8) 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:46.659 starting I/O failed: -6 00:06:48.037 [2024-11-20 10:24:28.333862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21779a0 is same with the state(6) to be set 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error 
(sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 [2024-11-20 10:24:28.358327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21762c0 is same with the state(6) to be set 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 
00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 [2024-11-20 10:24:28.358526] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2176680 is same with the state(6) to be set 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read 
completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 [2024-11-20 10:24:28.361666] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eff2c00d020 is same with the state(6) to be set 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Write completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.037 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed 
with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Read completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 Write completed with error (sct=0, sc=8) 00:06:48.038 [2024-11-20 10:24:28.362621] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7eff2c00d7e0 is same with the state(6) to be set 00:06:48.038 Initializing NVMe Controllers 00:06:48.038 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:48.038 Controller IO queue size 128, less than required. 00:06:48.038 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:48.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:48.038 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:48.038 Initialization complete. Launching workers. 
00:06:48.038 ======================================================== 00:06:48.038 Latency(us) 00:06:48.038 Device Information : IOPS MiB/s Average min max 00:06:48.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 174.26 0.09 885623.10 327.97 1006208.92 00:06:48.038 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 181.73 0.09 921582.85 291.36 1010215.94 00:06:48.038 ======================================================== 00:06:48.038 Total : 356.00 0.17 903980.17 291.36 1010215.94 00:06:48.038 00:06:48.038 [2024-11-20 10:24:28.363159] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x21779a0 (9): Bad file descriptor 00:06:48.038 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:06:48.038 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.038 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:06:48.038 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3065309 00:06:48.038 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:06:48.297 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:06:48.297 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3065309 00:06:48.298 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 35: kill: (3065309) - No such process 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3065309 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:06:48.298 10:24:28 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3065309 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3065309 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:48.298 
10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.298 [2024-11-20 10:24:28.893370] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3066002 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:48.298 10:24:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:48.298 [2024-11-20 10:24:28.981589] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to 
the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:06:48.865 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:48.866 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:48.866 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:49.433 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:49.433 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:49.433 10:24:29 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:50.001 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:50.001 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:50.001 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:50.259 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:50.259 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:50.259 10:24:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:50.827 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:50.827 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:50.827 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:51.395 10:24:31 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:51.395 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:51.395 10:24:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:06:51.395 Initializing NVMe Controllers 00:06:51.395 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:06:51.395 Controller IO queue size 128, less than required. 00:06:51.395 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:06:51.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:06:51.395 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:06:51.395 Initialization complete. Launching workers. 00:06:51.395 ======================================================== 00:06:51.395 Latency(us) 00:06:51.395 Device Information : IOPS MiB/s Average min max 00:06:51.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1002146.77 1000126.58 1006041.98 00:06:51.395 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005005.05 1000137.24 1042631.83 00:06:51.395 ======================================================== 00:06:51.395 Total : 256.00 0.12 1003575.91 1000126.58 1042631.83 00:06:51.395 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3066002 00:06:51.963 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3066002) - No such process 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # 
wait 3066002 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:06:51.963 rmmod nvme_tcp 00:06:51.963 rmmod nvme_fabrics 00:06:51.963 rmmod nvme_keyring 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 3065239 ']' 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 3065239 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3065239 ']' 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3065239 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:06:51.963 10:24:32 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3065239 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3065239' 00:06:51.963 killing process with pid 3065239 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3065239 00:06:51.963 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3065239 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:52.223 10:24:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:06:54.130 10:24:34 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:06:54.130 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:06:54.131 00:06:54.131 real 0m16.984s 00:06:54.131 user 0m30.650s 00:06:54.131 sys 0m5.616s 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:06:54.131 ************************************ 00:06:54.131 END TEST nvmf_delete_subsystem 00:06:54.131 ************************************ 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:54.131 10:24:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:54.392 ************************************ 00:06:54.392 START TEST nvmf_host_management 00:06:54.392 
************************************ 00:06:54.392 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:06:54.392 * Looking for test storage... 00:06:54.392 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:54.392 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:06:54.392 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:06:54.392 10:24:34 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@341 -- # 
ver2_l=1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:54.392 10:24:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:06:54.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.392 --rc genhtml_branch_coverage=1 00:06:54.392 --rc genhtml_function_coverage=1 00:06:54.392 --rc genhtml_legend=1 00:06:54.392 --rc geninfo_all_blocks=1 00:06:54.392 --rc geninfo_unexecuted_blocks=1 00:06:54.392 00:06:54.392 ' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:06:54.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.392 --rc genhtml_branch_coverage=1 00:06:54.392 --rc genhtml_function_coverage=1 00:06:54.392 --rc genhtml_legend=1 00:06:54.392 --rc geninfo_all_blocks=1 00:06:54.392 --rc geninfo_unexecuted_blocks=1 00:06:54.392 00:06:54.392 ' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:06:54.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.392 --rc genhtml_branch_coverage=1 00:06:54.392 --rc genhtml_function_coverage=1 00:06:54.392 --rc genhtml_legend=1 00:06:54.392 --rc geninfo_all_blocks=1 00:06:54.392 --rc geninfo_unexecuted_blocks=1 00:06:54.392 00:06:54.392 ' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:06:54.392 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:54.392 --rc genhtml_branch_coverage=1 00:06:54.392 --rc genhtml_function_coverage=1 00:06:54.392 --rc genhtml_legend=1 00:06:54.392 --rc geninfo_all_blocks=1 00:06:54.392 --rc geninfo_unexecuted_blocks=1 00:06:54.392 00:06:54.392 ' 
00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:06:54.392 10:24:35 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:06:54.392 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:06:54.393 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:06:54.393 10:24:35 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # net_devs=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:07:01.087 10:24:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 
]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:01.087 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:01.087 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:01.087 10:24:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:01.087 Found net devices under 0000:86:00.0: cvl_0_0 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:01.087 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:01.088 Found net devices under 0000:86:00.1: cvl_0_1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@142 -- # local 
ns=nvmf_ns_spdk 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:01.088 10:24:40 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
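[Editor's note] setup_interface_pair above carries addresses as 32-bit integers (ip_pool starts at 0x0a000001, i.e. 167772161) and the val_to_ip helper converts them to dotted-quad strings before `ip addr add` runs, as the `printf '%u.%u.%u.%u\n' 10 0 0 1` trace lines below show. A minimal sketch of that conversion (the function name matches the trace; the body is a plain-shell reconstruction, not SPDK's exact code):

```shell
# Reconstruction of val_to_ip: split a 32-bit integer into four octets.
val_to_ip() {
    val=$(( $1 ))
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # the paired address from ips=("$ip" $((++ip))) -> 10.0.0.2
```

This is why the trace increments ip_pool by 2 per pair: one integer for the initiator address and the next for the target.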
00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:01.088 10.0.0.1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@204 -- 
# local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:01.088 10.0.0.2 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' 
ip link set cvl_0_0 up' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:01.088 10:24:40 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:01.088 10:24:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:01.088 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:01.089 10:24:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:01.089 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
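[Editor's note] The get_net_dev calls above resolve logical endpoint names (initiator0, target0) to physical devices through the dev_map array that setup_interface_pair filled in; unmapped names (initiator1, target1 later in this log) make it return 1. A sketch of that lookup, with a case statement standing in for the bash associative array and hard-coding the single pair seen in this run:

```shell
# Stand-in for the dev_map lookup: logical name -> physical net device.
# The real setup.sh populates an associative array per interface pair.
get_net_dev() {
    case $1 in
        initiator0) echo cvl_0_0 ;;
        target0)    echo cvl_0_1 ;;
        *)          return 1 ;;   # initiator1/target1: no second pair exists
    esac
}

get_net_dev initiator0   # cvl_0_0
```

The return-1 path is what later lets NVMF_SECOND_INITIATOR_IP and NVMF_SECOND_TARGET_IP end up empty without aborting the setup.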
00:07:01.089 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.316 ms 00:07:01.089 00:07:01.089 --- 10.0.0.1 ping statistics --- 00:07:01.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.089 rtt min/avg/max/mdev = 0.316/0.316/0.316/0.000 ms 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:01.089 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
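[Editor's note] Each address is recorded twice in the traces above: `ip addr add` puts it on the device, and a `tee` into /sys/class/net/&lt;dev&gt;/ifalias stores the dotted-quad string that get_ip_address later reads back with `cat`. A sketch of that round trip, using a temp directory in place of /sys/class/net since writing the real sysfs file needs root:

```shell
# ifalias used as a tiny key/value store: set_ip writes it, get_ip_address reads it.
sysnet=$(mktemp -d)                             # stand-in for /sys/class/net
mkdir -p "$sysnet/cvl_0_0"

echo 10.0.0.1 | tee "$sysnet/cvl_0_0/ifalias"   # set_ip side (tee echoes, as in the log)
ip=$(cat "$sysnet/cvl_0_0/ifalias")             # get_ip_address side
[ -n "$ip" ] && echo "resolved: $ip"            # mirrors the [[ -n $ip ]] check at setup.sh@173
```

Storing the IP in ifalias lets later helpers recover it without parsing `ip addr` output, inside or outside the namespace.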
00:07:01.089 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:07:01.089 00:07:01.089 --- 10.0.0.2 ping statistics --- 00:07:01.089 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:01.089 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:01.089 10:24:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:01.089 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:07:01.090 
10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.090 10:24:41 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=3070258 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 3070258 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3070258 ']' 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:01.090 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.090 10:24:41 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.090 [2024-11-20 10:24:41.284418] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
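[Editor's note] waitforlisten above blocks until the freshly started nvmf_tgt (pid 3070258) is serving its RPC socket at /var/tmp/spdk.sock, with max_retries=100 as the budget. A hypothetical polling loop in the same spirit (wait_for_path and its parameters are illustrative; the real helper in autotest_common.sh is more involved):

```shell
# Poll until a path (e.g. an RPC UNIX socket) appears, with a retry budget.
wait_for_path() {
    path=$1
    retries=${2:-100}
    while [ ! -e "$path" ]; do
        retries=$(( retries - 1 ))
        if [ "$retries" -le 0 ]; then
            return 1               # gave up: the process never started listening
        fi
        sleep 0.1
    done
    return 0
}
```

Usage in this log's terms would be `wait_for_path /var/tmp/spdk.sock 100`, run right after launching the target inside the namespace.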
00:07:01.090 [2024-11-20 10:24:41.284463] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:01.090 [2024-11-20 10:24:41.362549] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:01.090 [2024-11-20 10:24:41.405255] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:01.090 [2024-11-20 10:24:41.405293] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:01.090 [2024-11-20 10:24:41.405300] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:01.090 [2024-11-20 10:24:41.405306] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:01.090 [2024-11-20 10:24:41.405311] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
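[Editor's note] The EAL parameters above carry the core mask (`-c 0x1E`, from `nvmfappstart -m 0x1E`), which is why the app reports "Total cores available: 4" and starts reactors on cores 1 through 4: 0x1E has exactly bits 1..4 set. A small sketch of decoding such a mask (mask_to_cores is an illustrative helper, not part of SPDK):

```shell
# Decode a CPU core mask (e.g. 0x1E) into the list of selected core ids.
mask_to_cores() {
    mask=$(( $1 ))   # arithmetic expansion accepts hex like 0x1E
    core=0
    out=""
    while [ "$mask" -ne 0 ]; do
        if [ $(( mask & 1 )) -eq 1 ]; then
            out="$out$core "
        fi
        mask=$(( mask >> 1 ))
        core=$(( core + 1 ))
    done
    echo "${out% }"
}

mask_to_cores 0x1E   # 1 2 3 4 -> the four reactor cores in this run
```

The reactor lines that follow print in arbitrary order (2, 3, 1, 4) because each reactor logs from its own core as it comes up.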
00:07:01.090 [2024-11-20 10:24:41.406868] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:01.090 [2024-11-20 10:24:41.406975] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:07:01.090 [2024-11-20 10:24:41.407059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:01.090 [2024-11-20 10:24:41.407059] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.656 [2024-11-20 10:24:42.169143] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:07:01.656 10:24:42 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.656 Malloc0 00:07:01.656 [2024-11-20 10:24:42.245905] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.656 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3070510 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3070510 /var/tmp/bdevperf.sock 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3070510 ']' 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:01.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:01.657 { 00:07:01.657 "params": { 00:07:01.657 "name": "Nvme$subsystem", 00:07:01.657 "trtype": "$TEST_TRANSPORT", 00:07:01.657 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:01.657 "adrfam": "ipv4", 00:07:01.657 "trsvcid": "$NVMF_PORT", 00:07:01.657 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:01.657 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:01.657 "hdgst": ${hdgst:-false}, 
00:07:01.657 "ddgst": ${ddgst:-false} 00:07:01.657 }, 00:07:01.657 "method": "bdev_nvme_attach_controller" 00:07:01.657 } 00:07:01.657 EOF 00:07:01.657 )") 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:01.657 10:24:42 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:01.657 "params": { 00:07:01.657 "name": "Nvme0", 00:07:01.657 "trtype": "tcp", 00:07:01.657 "traddr": "10.0.0.2", 00:07:01.657 "adrfam": "ipv4", 00:07:01.657 "trsvcid": "4420", 00:07:01.657 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:01.657 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:01.657 "hdgst": false, 00:07:01.657 "ddgst": false 00:07:01.657 }, 00:07:01.657 "method": "bdev_nvme_attach_controller" 00:07:01.657 }' 00:07:01.657 [2024-11-20 10:24:42.342727] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:01.657 [2024-11-20 10:24:42.342777] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070510 ] 00:07:01.915 [2024-11-20 10:24:42.415546] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:01.915 [2024-11-20 10:24:42.456719] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:02.173 Running I/O for 10 seconds... 
00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_get_iostat -b Nvme0n1 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=1091 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 1091 -ge 100 ']' 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.742 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.742 [2024-11-20 10:24:43.244302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:02.742 [2024-11-20 10:24:43.244339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.742 [2024-11-20 10:24:43.244349] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT 
REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:02.742 [2024-11-20 10:24:43.244356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:02.743 [2024-11-20 10:24:43.244371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244378] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:02.743 [2024-11-20 10:24:43.244385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244391] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xddf500 is same with the state(6) to be set 00:07:02.743 [2024-11-20 10:24:43.244742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 
[2024-11-20 10:24:43.244799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244891] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.244986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.244994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 
[2024-11-20 10:24:43.245143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245230] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.743 [2024-11-20 10:24:43.245299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.743 [2024-11-20 10:24:43.245308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 
10:24:43.245484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245568] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:02.744 [2024-11-20 10:24:43.245721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:02.744 [2024-11-20 10:24:43.245729] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xff8810 is same with the 
state(6) to be set 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:02.744 [2024-11-20 10:24:43.246654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:02.744 task offset: 16384 on job bdev=Nvme0n1 fails 00:07:02.744 00:07:02.744 Latency(us) 00:07:02.744 [2024-11-20T09:24:43.475Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:02.744 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:02.744 Job: Nvme0n1 ended in about 0.58 seconds with error 00:07:02.744 Verification LBA range: start 0x0 length 0x400 00:07:02.744 Nvme0n1 : 0.58 1975.87 123.49 109.77 0.00 30047.25 1732.02 27462.70 00:07:02.744 [2024-11-20T09:24:43.475Z] =================================================================================================================== 00:07:02.744 [2024-11-20T09:24:43.475Z] Total : 1975.87 123.49 109.77 0.00 30047.25 1732.02 27462.70 00:07:02.744 [2024-11-20 10:24:43.249106] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:02.744 [2024-11-20 10:24:43.249126] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xddf500 (9): Bad file descriptor 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:02.744 10:24:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:02.744 [2024-11-20 
10:24:43.301218] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3070510 00:07:03.681 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3070510) - No such process 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:07:03.681 { 00:07:03.681 "params": { 00:07:03.681 "name": "Nvme$subsystem", 00:07:03.681 "trtype": "$TEST_TRANSPORT", 00:07:03.681 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:03.681 "adrfam": "ipv4", 00:07:03.681 "trsvcid": "$NVMF_PORT", 00:07:03.681 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:03.681 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:03.681 "hdgst": ${hdgst:-false}, 00:07:03.681 "ddgst": 
${ddgst:-false} 00:07:03.681 }, 00:07:03.681 "method": "bdev_nvme_attach_controller" 00:07:03.681 } 00:07:03.681 EOF 00:07:03.681 )") 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:07:03.681 10:24:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:07:03.681 "params": { 00:07:03.681 "name": "Nvme0", 00:07:03.681 "trtype": "tcp", 00:07:03.681 "traddr": "10.0.0.2", 00:07:03.681 "adrfam": "ipv4", 00:07:03.681 "trsvcid": "4420", 00:07:03.681 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:03.681 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:03.681 "hdgst": false, 00:07:03.681 "ddgst": false 00:07:03.681 }, 00:07:03.681 "method": "bdev_nvme_attach_controller" 00:07:03.681 }' 00:07:03.681 [2024-11-20 10:24:44.315669] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:03.681 [2024-11-20 10:24:44.315718] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3070782 ] 00:07:03.681 [2024-11-20 10:24:44.391997] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:03.940 [2024-11-20 10:24:44.431224] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:03.940 Running I/O for 1 seconds... 
00:07:05.316 2002.00 IOPS, 125.12 MiB/s 00:07:05.316 Latency(us) 00:07:05.316 [2024-11-20T09:24:46.047Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:05.316 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:05.316 Verification LBA range: start 0x0 length 0x400 00:07:05.316 Nvme0n1 : 1.01 2044.09 127.76 0.00 0.00 30733.05 1451.15 27088.21 00:07:05.316 [2024-11-20T09:24:46.047Z] =================================================================================================================== 00:07:05.316 [2024-11-20T09:24:46.047Z] Total : 2044.09 127.76 0.00 0.00 30733.05 1451.15 27088.21 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:05.316 10:24:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:05.316 rmmod nvme_tcp 00:07:05.316 rmmod nvme_fabrics 00:07:05.316 rmmod nvme_keyring 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 3070258 ']' 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 3070258 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3070258 ']' 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3070258 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3070258 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3070258' 00:07:05.316 killing process with pid 3070258 00:07:05.316 10:24:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3070258 00:07:05.316 10:24:45 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3070258 00:07:05.575 [2024-11-20 10:24:46.084387] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:05.575 10:24:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local 
dev=cvl_0_0 in_ns= 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:07.482 10:24:48 
nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:07:07.482 00:07:07.482 real 0m13.308s 00:07:07.482 user 0m23.140s 00:07:07.482 sys 0m5.735s 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:07.482 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:07.482 ************************************ 00:07:07.482 END TEST nvmf_host_management 00:07:07.482 ************************************ 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:07.741 ************************************ 00:07:07.741 START TEST nvmf_lvol 00:07:07.741 ************************************ 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:07:07.741 * Looking for test storage... 
00:07:07.741 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:07.741 10:24:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:07.741 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.742 --rc genhtml_branch_coverage=1 00:07:07.742 --rc genhtml_function_coverage=1 00:07:07.742 --rc genhtml_legend=1 00:07:07.742 --rc geninfo_all_blocks=1 00:07:07.742 --rc geninfo_unexecuted_blocks=1 
00:07:07.742 00:07:07.742 ' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.742 --rc genhtml_branch_coverage=1 00:07:07.742 --rc genhtml_function_coverage=1 00:07:07.742 --rc genhtml_legend=1 00:07:07.742 --rc geninfo_all_blocks=1 00:07:07.742 --rc geninfo_unexecuted_blocks=1 00:07:07.742 00:07:07.742 ' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.742 --rc genhtml_branch_coverage=1 00:07:07.742 --rc genhtml_function_coverage=1 00:07:07.742 --rc genhtml_legend=1 00:07:07.742 --rc geninfo_all_blocks=1 00:07:07.742 --rc geninfo_unexecuted_blocks=1 00:07:07.742 00:07:07.742 ' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:07.742 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:07.742 --rc genhtml_branch_coverage=1 00:07:07.742 --rc genhtml_function_coverage=1 00:07:07.742 --rc genhtml_legend=1 00:07:07.742 --rc geninfo_all_blocks=1 00:07:07.742 --rc geninfo_unexecuted_blocks=1 00:07:07.742 00:07:07.742 ' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:07.742 10:24:48 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:07.742 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:07:08.002 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:08.002 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:08.002 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:08.002 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
paths/export.sh@5 -- # export PATH 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:08.003 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 
-- # _remove_target_ns 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:07:08.003 10:24:48 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:07:14.577 10:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol 
-- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:14.577 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:14.577 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:14.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- 
# [[ e810 == e810 ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:14.578 Found net devices under 0000:86:00.0: cvl_0_0 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:14.578 Found net devices under 0000:86:00.1: cvl_0_1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # ips=() 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
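The `ip_pool=0x0a000001` arithmetic above hands `setup_interface_pair` a 32-bit integer per endpoint (167772161 for the initiator, `$((++ip))` = 167772162 for the target), which `set_ip` later renders as a dotted quad via `val_to_ip`. A minimal sketch of that conversion — the bit-shift implementation is an assumption; the real `nvmf/setup.sh` helper may derive the octets differently:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip step seen in the trace: split a 32-bit integer
# into four octets. 167772161 == 0x0A000001 -> 10.0.0.1.
# (Assumed implementation, mirroring the helper name in nvmf/setup.sh.)
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side)
val_to_ip 167772162   # 10.0.0.2 (target side)
```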
00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 
10.0.0.1/24 dev cvl_0_0' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:14.578 10.0.0.1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 
10.0.0.2 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:14.578 10.0.0.2 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:14.578 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:14.579 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:14.579 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.449 ms 00:07:14.579 00:07:14.579 --- 10.0.0.1 ping statistics --- 00:07:14.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.579 rtt min/avg/max/mdev = 0.449/0.449/0.449/0.000 ms 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:14.579 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:14.579 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.245 ms 00:07:14.579 00:07:14.579 --- 10.0.0.2 ping statistics --- 00:07:14.579 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:14.579 rtt min/avg/max/mdev = 0.245/0.245/0.245/0.000 ms 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:14.579 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:14.580 10:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 
-- # local dev=target0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:14.580 10:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=3074634 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 3074634 00:07:14.580 10:24:54 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3074634 ']' 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:14.580 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.580 [2024-11-20 10:24:54.696852] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:14.580 [2024-11-20 10:24:54.696903] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:14.580 [2024-11-20 10:24:54.758223] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:14.580 [2024-11-20 10:24:54.800813] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:14.580 [2024-11-20 10:24:54.800846] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
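The `ip netns exec nvmf_ns_spdk .../nvmf_tgt` invocation above is produced by the earlier `NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")` step, which prepends the namespace wrapper onto the app command as a bash array. A sketch of that array-prefix pattern — `echo` stands in for actually launching `nvmf_tgt`, which needs root and a built SPDK tree:

```shell
#!/usr/bin/env bash
# Sketch of the array-prefix step: run the target app inside the network
# namespace by prepending "ip netns exec <ns>" to its argv. Using arrays
# (not strings) preserves word boundaries if arguments contain spaces.
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
NVMF_APP=(nvmf_tgt -i 0 -e 0xFFFF -m 0x7)

# Prefix the wrapper onto the app command, element by element.
NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")

echo "${NVMF_APP[@]}"
# -> ip netns exec nvmf_ns_spdk nvmf_tgt -i 0 -e 0xFFFF -m 0x7
```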
00:07:14.580 [2024-11-20 10:24:54.800853] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:14.580 [2024-11-20 10:24:54.800860] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:14.580 [2024-11-20 10:24:54.800865] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:14.580 [2024-11-20 10:24:54.802299] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:14.580 [2024-11-20 10:24:54.802405] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:14.580 [2024-11-20 10:24:54.802406] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:14.580 10:24:54 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:14.580 [2024-11-20 10:24:55.103032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:14.580 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.839 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # 
base_bdevs='Malloc0 ' 00:07:14.839 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:07:14.839 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:07:14.839 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:07:15.098 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:07:15.357 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=e5e852d9-8df5-438a-a1e1-4e4bef9068ce 00:07:15.357 10:24:55 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u e5e852d9-8df5-438a-a1e1-4e4bef9068ce lvol 20 00:07:15.616 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=96d88df8-97fb-4ce0-a381-ce65699d4ca6 00:07:15.616 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:15.875 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96d88df8-97fb-4ce0-a381-ce65699d4ca6 00:07:15.875 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:16.133 [2024-11-20 10:24:56.761122] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 
10.0.0.2 port 4420 *** 00:07:16.134 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:16.392 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3075067 00:07:16.392 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:07:16.392 10:24:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:07:17.327 10:24:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 96d88df8-97fb-4ce0-a381-ce65699d4ca6 MY_SNAPSHOT 00:07:17.586 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=4400e9ad-4c9e-4342-ae5c-29bed81d9152 00:07:17.586 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 96d88df8-97fb-4ce0-a381-ce65699d4ca6 30 00:07:17.844 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 4400e9ad-4c9e-4342-ae5c-29bed81d9152 MY_CLONE 00:07:18.102 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=fac2e84b-ccef-451c-bf7d-f27b97de851a 00:07:18.102 10:24:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate fac2e84b-ccef-451c-bf7d-f27b97de851a 00:07:18.667 10:24:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3075067 00:07:26.781 Initializing NVMe Controllers 00:07:26.781 
Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0
00:07:26.781 Controller IO queue size 128, less than required.
00:07:26.781 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:26.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3
00:07:26.781 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4
00:07:26.781 Initialization complete. Launching workers.
00:07:26.781 ========================================================
00:07:26.781                                                                                                          Latency(us)
00:07:26.781 Device Information                                                                                       : IOPS       MiB/s    Average        min        max
00:07:26.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  3:   11806.60      46.12    10844.22    2122.72   67006.96
00:07:26.781 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core  4:   11979.60      46.80    10687.17    1851.62   71998.96
00:07:26.781 ========================================================
00:07:26.781 Total                                                                    :   23786.20      92.91    10765.13    1851.62   71998.96
00:07:26.781
00:07:26.781 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0
00:07:26.781 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96d88df8-97fb-4ce0-a381-ce65699d4ca6
00:07:27.040 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u e5e852d9-8df5-438a-a1e1-4e4bef9068ce
00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f
00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT
00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol --
target/nvmf_lvol.sh@64 -- # nvmftestfini 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:07:27.299 rmmod nvme_tcp 00:07:27.299 rmmod nvme_fabrics 00:07:27.299 rmmod nvme_keyring 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 3074634 ']' 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 3074634 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3074634 ']' 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3074634 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:27.299 10:25:07 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3074634 00:07:27.299 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:27.299 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:27.299 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3074634' 00:07:27.299 killing process with pid 3074634 00:07:27.299 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3074634 00:07:27.299 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3074634 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:27.559 10:25:08 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:07:30.096 10:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:07:30.096 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:07:30.096 00:07:30.096 real 
0m22.011s 00:07:30.096 user 1m2.805s 00:07:30.097 sys 0m7.787s 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:07:30.097 ************************************ 00:07:30.097 END TEST nvmf_lvol 00:07:30.097 ************************************ 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:30.097 ************************************ 00:07:30.097 START TEST nvmf_lvs_grow 00:07:30.097 ************************************ 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:07:30.097 * Looking for test storage... 
00:07:30.097 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:07:30.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.097 --rc genhtml_branch_coverage=1 00:07:30.097 --rc 
genhtml_function_coverage=1 00:07:30.097 --rc genhtml_legend=1 00:07:30.097 --rc geninfo_all_blocks=1 00:07:30.097 --rc geninfo_unexecuted_blocks=1 00:07:30.097 00:07:30.097 ' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:07:30.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.097 --rc genhtml_branch_coverage=1 00:07:30.097 --rc genhtml_function_coverage=1 00:07:30.097 --rc genhtml_legend=1 00:07:30.097 --rc geninfo_all_blocks=1 00:07:30.097 --rc geninfo_unexecuted_blocks=1 00:07:30.097 00:07:30.097 ' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:07:30.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.097 --rc genhtml_branch_coverage=1 00:07:30.097 --rc genhtml_function_coverage=1 00:07:30.097 --rc genhtml_legend=1 00:07:30.097 --rc geninfo_all_blocks=1 00:07:30.097 --rc geninfo_unexecuted_blocks=1 00:07:30.097 00:07:30.097 ' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:07:30.097 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:30.097 --rc genhtml_branch_coverage=1 00:07:30.097 --rc genhtml_function_coverage=1 00:07:30.097 --rc genhtml_legend=1 00:07:30.097 --rc geninfo_all_blocks=1 00:07:30.097 --rc geninfo_unexecuted_blocks=1 00:07:30.097 00:07:30.097 ' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:30.097 10:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.097 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 
00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:07:30.098 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:07:30.098 10:25:10 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:07:30.098 10:25:10 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:07:36.671 
10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # x722=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:36.671 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:36.671 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.671 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:36.672 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:36.672 Found net devices under 0000:86:00.0: cvl_0_0 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.672 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:36.672 Found net devices under 0000:86:00.1: cvl_0_1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:07:36.672 10:25:16 
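The discovery loop traced above resolves each whitelisted PCI function to its kernel net device through sysfs (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)`). A minimal standalone sketch of that mapping; the `0000:86:00.x` addresses and `cvl_*` names seen in this log are machine-specific, and the output of this sketch depends entirely on the host it runs on:

```shell
#!/usr/bin/env bash
# Enumerate PCI devices that expose a network interface, mirroring the
# sysfs lookup used by nvmf/common.sh. Prints "<pci-address> <netdev>".
for pci in /sys/bus/pci/devices/*; do
    for netdev in "$pci"/net/*; do
        # The glob stays unexpanded when the device has no net/ subdir.
        [ -e "$netdev" ] || continue
        printf '%s %s\n' "${pci##*/}" "${netdev##*/}"
    done
done
```

No assertions are made on the output since the PCI/netdev inventory is host-dependent.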
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:07:36.672 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 
00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772161 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:07:36.672 10.0.0.1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.672 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:07:36.672 10.0.0.2 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 
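The `val_to_ip` calls traced above turn the `ip_pool` counter (starting at `0x0a000001` = 167772161) into dotted-quad addresses via `printf '%u.%u.%u.%u\n'`. The function body below is a reconstruction from the traced output, not a verbatim copy of `nvmf/setup.sh`:

```shell
# Unpack a 32-bit integer into dotted-quad notation, one octet per byte,
# most significant byte first (167772161 == 0x0A000001 -> 10.0.0.1).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why consecutive interface pairs in the log land on 10.0.0.1/10.0.0.2: the pool value is simply incremented by two per pair.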
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:07:36.672 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:07:36.673 
10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:36.673 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:07:36.673 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:36.673 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.388 ms 00:07:36.673 00:07:36.673 --- 10.0.0.1 ping statistics --- 00:07:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.673 rtt min/avg/max/mdev = 0.388/0.388/0.388/0.000 ms 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:36.673 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:07:36.673 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:36.673 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:07:36.673 00:07:36.673 --- 10.0.0.2 ping statistics --- 00:07:36.673 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:36.673 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:07:36.673 10:25:16 
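Stripped of the xtrace plumbing, the topology built and verified up to this point is a short sequence of iproute2/iptables calls (root required; `cvl_0_0`/`cvl_0_1` and the 10.0.0.0/24 pool are the values from this particular run):

```shell
ip netns add nvmf_ns_spdk                  # target-side namespace
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk     # move the target port into the ns
ip addr add 10.0.0.1/24 dev cvl_0_0        # initiator side, host namespace
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
# open the NVMe/TCP listener port on the initiator-facing interface
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
# connectivity check in both directions, as the ping_ips helper does
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

This is host configuration, not a portable script: it assumes the two physical ports are cabled back-to-back (or share a switch), which is what makes the cross-namespace pings succeed.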
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:07:36.673 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:07:36.673 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:07:36.674 10:25:16 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=3080473 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 3080473 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3080473 ']' 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:36.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.674 [2024-11-20 10:25:16.749007] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:07:36.674 [2024-11-20 10:25:16.749057] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:36.674 [2024-11-20 10:25:16.828911] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:36.674 [2024-11-20 10:25:16.869519] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:36.674 [2024-11-20 10:25:16.869559] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:36.674 [2024-11-20 10:25:16.869566] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:36.674 [2024-11-20 10:25:16.869572] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:36.674 [2024-11-20 10:25:16.869577] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
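The target application is then started inside the namespace with the flags recorded in the log, and `waitforlisten` amounts to polling the RPC socket until it answers. A hedged sketch of that launch sequence, with paths relative to an SPDK checkout (not runnable without a built tree; the readiness poll via `rpc_get_methods` is an illustration of the pattern, not the literal helper body):

```shell
# Launch nvmf_tgt inside the target namespace (flags taken from this run:
# shm id 0, tracepoint mask 0xFFFF, core mask 0x1).
ip netns exec nvmf_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 &
nvmfpid=$!

# Poll the default /var/tmp/spdk.sock RPC socket until the app is up.
until ./scripts/rpc.py rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done
```

Once the socket answers, the test proceeds to `nvmf_create_transport -t tcp -o -u 8192`, which produces the "TCP Transport Init" notice seen below.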
00:07:36.674 [2024-11-20 10:25:16.870141] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:07:36.674 10:25:16 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:07:36.674 [2024-11-20 10:25:17.173001] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:36.674 ************************************ 00:07:36.674 START TEST lvs_grow_clean 00:07:36.674 ************************************ 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local 
aio_bdev lvs lvol 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:36.674 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:36.933 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:36.933 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:37.190 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:37.191 10:25:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:37.191 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:37.191 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:37.191 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:37.191 10:25:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 lvol 150 00:07:37.448 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=f337a480-5978-4b36-a05c-e08ab75864f0 00:07:37.448 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:37.448 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:37.704 [2024-11-20 10:25:18.203092] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:07:37.704 [2024-11-20 10:25:18.203144] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:37.704 true 00:07:37.704 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:37.704 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:37.704 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:37.704 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:37.962 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 f337a480-5978-4b36-a05c-e08ab75864f0 00:07:38.220 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:38.220 [2024-11-20 10:25:18.941325] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:38.478 10:25:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3080970 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:38.478 10:25:19 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3080970 /var/tmp/bdevperf.sock 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3080970 ']' 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:38.478 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:38.478 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:38.478 [2024-11-20 10:25:19.173703] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:38.478 [2024-11-20 10:25:19.173747] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3080970 ] 00:07:38.737 [2024-11-20 10:25:19.248019] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:38.737 [2024-11-20 10:25:19.288048] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:38.737 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:38.737 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:07:38.737 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:38.995 Nvme0n1 00:07:38.996 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:39.254 [ 00:07:39.254 { 00:07:39.254 "name": "Nvme0n1", 00:07:39.254 "aliases": [ 00:07:39.254 "f337a480-5978-4b36-a05c-e08ab75864f0" 00:07:39.254 ], 00:07:39.254 "product_name": "NVMe disk", 00:07:39.254 "block_size": 4096, 00:07:39.254 "num_blocks": 38912, 00:07:39.254 "uuid": "f337a480-5978-4b36-a05c-e08ab75864f0", 00:07:39.254 "numa_id": 1, 00:07:39.254 "assigned_rate_limits": { 00:07:39.254 "rw_ios_per_sec": 0, 00:07:39.254 "rw_mbytes_per_sec": 0, 00:07:39.254 "r_mbytes_per_sec": 0, 00:07:39.254 "w_mbytes_per_sec": 0 00:07:39.254 }, 00:07:39.254 "claimed": false, 00:07:39.254 "zoned": false, 00:07:39.254 "supported_io_types": { 00:07:39.254 "read": true, 
00:07:39.254 "write": true, 00:07:39.254 "unmap": true, 00:07:39.254 "flush": true, 00:07:39.254 "reset": true, 00:07:39.254 "nvme_admin": true, 00:07:39.254 "nvme_io": true, 00:07:39.254 "nvme_io_md": false, 00:07:39.254 "write_zeroes": true, 00:07:39.254 "zcopy": false, 00:07:39.254 "get_zone_info": false, 00:07:39.254 "zone_management": false, 00:07:39.254 "zone_append": false, 00:07:39.254 "compare": true, 00:07:39.254 "compare_and_write": true, 00:07:39.254 "abort": true, 00:07:39.254 "seek_hole": false, 00:07:39.254 "seek_data": false, 00:07:39.254 "copy": true, 00:07:39.254 "nvme_iov_md": false 00:07:39.254 }, 00:07:39.254 "memory_domains": [ 00:07:39.254 { 00:07:39.254 "dma_device_id": "system", 00:07:39.254 "dma_device_type": 1 00:07:39.254 } 00:07:39.254 ], 00:07:39.254 "driver_specific": { 00:07:39.254 "nvme": [ 00:07:39.254 { 00:07:39.254 "trid": { 00:07:39.254 "trtype": "TCP", 00:07:39.254 "adrfam": "IPv4", 00:07:39.254 "traddr": "10.0.0.2", 00:07:39.254 "trsvcid": "4420", 00:07:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:39.254 }, 00:07:39.254 "ctrlr_data": { 00:07:39.254 "cntlid": 1, 00:07:39.254 "vendor_id": "0x8086", 00:07:39.254 "model_number": "SPDK bdev Controller", 00:07:39.254 "serial_number": "SPDK0", 00:07:39.254 "firmware_revision": "25.01", 00:07:39.254 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:39.254 "oacs": { 00:07:39.254 "security": 0, 00:07:39.254 "format": 0, 00:07:39.254 "firmware": 0, 00:07:39.254 "ns_manage": 0 00:07:39.254 }, 00:07:39.254 "multi_ctrlr": true, 00:07:39.254 "ana_reporting": false 00:07:39.254 }, 00:07:39.254 "vs": { 00:07:39.254 "nvme_version": "1.3" 00:07:39.254 }, 00:07:39.254 "ns_data": { 00:07:39.254 "id": 1, 00:07:39.254 "can_share": true 00:07:39.254 } 00:07:39.254 } 00:07:39.254 ], 00:07:39.254 "mp_policy": "active_passive" 00:07:39.254 } 00:07:39.254 } 00:07:39.254 ] 00:07:39.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3081197 00:07:39.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:39.254 10:25:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:39.254 Running I/O for 10 seconds... 00:07:40.631 Latency(us) 00:07:40.631 [2024-11-20T09:25:21.362Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:40.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:40.631 Nvme0n1 : 1.00 23099.00 90.23 0.00 0.00 0.00 0.00 0.00 00:07:40.631 [2024-11-20T09:25:21.362Z] =================================================================================================================== 00:07:40.631 [2024-11-20T09:25:21.362Z] Total : 23099.00 90.23 0.00 0.00 0.00 0.00 0.00 00:07:40.631 00:07:41.197 10:25:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:41.456 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:41.456 Nvme0n1 : 2.00 23417.50 91.47 0.00 0.00 0.00 0.00 0.00 00:07:41.456 [2024-11-20T09:25:22.187Z] =================================================================================================================== 00:07:41.456 [2024-11-20T09:25:22.187Z] Total : 23417.50 91.47 0.00 0.00 0.00 0.00 0.00 00:07:41.456 00:07:41.456 true 00:07:41.456 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:41.456 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:41.714 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:41.714 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:41.715 10:25:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3081197 00:07:42.282 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:42.282 Nvme0n1 : 3.00 23514.33 91.85 0.00 0.00 0.00 0.00 0.00 00:07:42.282 [2024-11-20T09:25:23.013Z] =================================================================================================================== 00:07:42.282 [2024-11-20T09:25:23.013Z] Total : 23514.33 91.85 0.00 0.00 0.00 0.00 0.00 00:07:42.282 00:07:43.710 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:43.710 Nvme0n1 : 4.00 23590.75 92.15 0.00 0.00 0.00 0.00 0.00 00:07:43.710 [2024-11-20T09:25:24.441Z] =================================================================================================================== 00:07:43.710 [2024-11-20T09:25:24.441Z] Total : 23590.75 92.15 0.00 0.00 0.00 0.00 0.00 00:07:43.710 00:07:44.322 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:44.322 Nvme0n1 : 5.00 23639.80 92.34 0.00 0.00 0.00 0.00 0.00 00:07:44.322 [2024-11-20T09:25:25.053Z] =================================================================================================================== 00:07:44.322 [2024-11-20T09:25:25.053Z] Total : 23639.80 92.34 0.00 0.00 0.00 0.00 0.00 00:07:44.322 00:07:45.255 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:45.255 Nvme0n1 : 6.00 23702.00 92.59 0.00 0.00 0.00 0.00 0.00 00:07:45.255 [2024-11-20T09:25:25.986Z] =================================================================================================================== 00:07:45.255 
[2024-11-20T09:25:25.986Z] Total : 23702.00 92.59 0.00 0.00 0.00 0.00 0.00 00:07:45.255 00:07:46.631 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:46.631 Nvme0n1 : 7.00 23737.57 92.72 0.00 0.00 0.00 0.00 0.00 00:07:46.631 [2024-11-20T09:25:27.362Z] =================================================================================================================== 00:07:46.631 [2024-11-20T09:25:27.362Z] Total : 23737.57 92.72 0.00 0.00 0.00 0.00 0.00 00:07:46.631 00:07:47.569 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:47.569 Nvme0n1 : 8.00 23778.88 92.89 0.00 0.00 0.00 0.00 0.00 00:07:47.569 [2024-11-20T09:25:28.300Z] =================================================================================================================== 00:07:47.569 [2024-11-20T09:25:28.300Z] Total : 23778.88 92.89 0.00 0.00 0.00 0.00 0.00 00:07:47.569 00:07:48.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:48.503 Nvme0n1 : 9.00 23811.00 93.01 0.00 0.00 0.00 0.00 0.00 00:07:48.503 [2024-11-20T09:25:29.234Z] =================================================================================================================== 00:07:48.503 [2024-11-20T09:25:29.234Z] Total : 23811.00 93.01 0.00 0.00 0.00 0.00 0.00 00:07:48.503 00:07:49.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:49.440 Nvme0n1 : 10.00 23831.20 93.09 0.00 0.00 0.00 0.00 0.00 00:07:49.440 [2024-11-20T09:25:30.171Z] =================================================================================================================== 00:07:49.440 [2024-11-20T09:25:30.171Z] Total : 23831.20 93.09 0.00 0.00 0.00 0.00 0.00 00:07:49.440 00:07:49.440 00:07:49.440 Latency(us) 00:07:49.440 [2024-11-20T09:25:30.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.440 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:07:49.440 Nvme0n1 : 10.00 23836.38 93.11 0.00 0.00 5367.00 1427.75 11421.99 00:07:49.440 [2024-11-20T09:25:30.171Z] =================================================================================================================== 00:07:49.440 [2024-11-20T09:25:30.171Z] Total : 23836.38 93.11 0.00 0.00 5367.00 1427.75 11421.99 00:07:49.440 { 00:07:49.440 "results": [ 00:07:49.440 { 00:07:49.440 "job": "Nvme0n1", 00:07:49.440 "core_mask": "0x2", 00:07:49.440 "workload": "randwrite", 00:07:49.440 "status": "finished", 00:07:49.440 "queue_depth": 128, 00:07:49.440 "io_size": 4096, 00:07:49.440 "runtime": 10.003198, 00:07:49.440 "iops": 23836.377126594914, 00:07:49.440 "mibps": 93.11084815076138, 00:07:49.440 "io_failed": 0, 00:07:49.440 "io_timeout": 0, 00:07:49.440 "avg_latency_us": 5366.996561331193, 00:07:49.440 "min_latency_us": 1427.7485714285715, 00:07:49.440 "max_latency_us": 11421.988571428572 00:07:49.440 } 00:07:49.440 ], 00:07:49.440 "core_count": 1 00:07:49.440 } 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3080970 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3080970 ']' 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3080970 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3080970 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:07:49.440 10:25:30 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3080970' 00:07:49.440 killing process with pid 3080970 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3080970 00:07:49.440 Received shutdown signal, test time was about 10.000000 seconds 00:07:49.440 00:07:49.440 Latency(us) 00:07:49.440 [2024-11-20T09:25:30.171Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:49.440 [2024-11-20T09:25:30.171Z] =================================================================================================================== 00:07:49.440 [2024-11-20T09:25:30.171Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:07:49.440 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3080970 00:07:49.698 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:49.957 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:07:49.957 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:49.957 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:07:50.216 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:07:50.216 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:07:50.216 10:25:30 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:50.475 [2024-11-20 10:25:31.036692] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.475 
10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:07:50.475 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:50.734 request: 00:07:50.734 { 00:07:50.734 "uuid": "edbfd62c-0f96-4b45-b8dc-1332770fc0f9", 00:07:50.734 "method": "bdev_lvol_get_lvstores", 00:07:50.734 "req_id": 1 00:07:50.734 } 00:07:50.734 Got JSON-RPC error response 00:07:50.734 response: 00:07:50.734 { 00:07:50.734 "code": -19, 00:07:50.734 "message": "No such device" 00:07:50.734 } 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:50.734 aio_bdev 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- 
target/nvmf_lvs_grow.sh@87 -- # waitforbdev f337a480-5978-4b36-a05c-e08ab75864f0 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=f337a480-5978-4b36-a05c-e08ab75864f0 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:07:50.734 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:07:50.992 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b f337a480-5978-4b36-a05c-e08ab75864f0 -t 2000 00:07:51.251 [ 00:07:51.251 { 00:07:51.251 "name": "f337a480-5978-4b36-a05c-e08ab75864f0", 00:07:51.251 "aliases": [ 00:07:51.251 "lvs/lvol" 00:07:51.251 ], 00:07:51.251 "product_name": "Logical Volume", 00:07:51.251 "block_size": 4096, 00:07:51.251 "num_blocks": 38912, 00:07:51.251 "uuid": "f337a480-5978-4b36-a05c-e08ab75864f0", 00:07:51.251 "assigned_rate_limits": { 00:07:51.251 "rw_ios_per_sec": 0, 00:07:51.251 "rw_mbytes_per_sec": 0, 00:07:51.251 "r_mbytes_per_sec": 0, 00:07:51.251 "w_mbytes_per_sec": 0 00:07:51.251 }, 00:07:51.251 "claimed": false, 00:07:51.251 "zoned": false, 00:07:51.251 "supported_io_types": { 00:07:51.251 "read": true, 00:07:51.251 "write": true, 00:07:51.251 "unmap": true, 00:07:51.251 "flush": false, 00:07:51.251 "reset": true, 00:07:51.251 
"nvme_admin": false, 00:07:51.251 "nvme_io": false, 00:07:51.251 "nvme_io_md": false, 00:07:51.251 "write_zeroes": true, 00:07:51.251 "zcopy": false, 00:07:51.251 "get_zone_info": false, 00:07:51.251 "zone_management": false, 00:07:51.251 "zone_append": false, 00:07:51.251 "compare": false, 00:07:51.251 "compare_and_write": false, 00:07:51.251 "abort": false, 00:07:51.251 "seek_hole": true, 00:07:51.251 "seek_data": true, 00:07:51.251 "copy": false, 00:07:51.252 "nvme_iov_md": false 00:07:51.252 }, 00:07:51.252 "driver_specific": { 00:07:51.252 "lvol": { 00:07:51.252 "lvol_store_uuid": "edbfd62c-0f96-4b45-b8dc-1332770fc0f9", 00:07:51.252 "base_bdev": "aio_bdev", 00:07:51.252 "thin_provision": false, 00:07:51.252 "num_allocated_clusters": 38, 00:07:51.252 "snapshot": false, 00:07:51.252 "clone": false, 00:07:51.252 "esnap_clone": false 00:07:51.252 } 00:07:51.252 } 00:07:51.252 } 00:07:51.252 ] 00:07:51.252 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:07:51.252 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:51.252 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:07:51.510 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:07:51.510 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:51.510 10:25:31 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:07:51.510 10:25:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:07:51.510 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete f337a480-5978-4b36-a05c-e08ab75864f0 00:07:51.768 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u edbfd62c-0f96-4b45-b8dc-1332770fc0f9 00:07:52.027 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.286 00:07:52.286 real 0m15.556s 00:07:52.286 user 0m15.043s 00:07:52.286 sys 0m1.530s 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:07:52.286 ************************************ 00:07:52.286 END TEST lvs_grow_clean 00:07:52.286 ************************************ 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:07:52.286 ************************************ 
00:07:52.286 START TEST lvs_grow_dirty 00:07:52.286 ************************************ 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:52.286 10:25:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:07:52.545 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:07:52.545 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:07:52.545 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=82aa1056-ea30-4807-aa19-eeb28e14ca09 00:07:52.545 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:07:52.545 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:07:52.803 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:07:52.804 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:07:52.804 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 lvol 150 00:07:53.062 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:07:53.062 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:07:53.062 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:07:53.321 [2024-11-20 10:25:33.797046] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 
102400 00:07:53.321 [2024-11-20 10:25:33.797098] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:07:53.321 true 00:07:53.321 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:07:53.321 10:25:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:07:53.321 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:07:53.321 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:07:53.580 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:07:53.839 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:07:53.839 [2024-11-20 10:25:34.523197] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:53.839 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3083618 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3083618 /var/tmp/bdevperf.sock 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3083618 ']' 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:54.097 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:54.097 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:07:54.097 [2024-11-20 10:25:34.750590] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:07:54.097 [2024-11-20 10:25:34.750633] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3083618 ] 00:07:54.355 [2024-11-20 10:25:34.824805] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:54.355 [2024-11-20 10:25:34.867562] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:07:54.355 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:54.355 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:07:54.355 10:25:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:07:54.613 Nvme0n1 00:07:54.613 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:07:54.872 [ 00:07:54.872 { 00:07:54.872 "name": "Nvme0n1", 00:07:54.872 "aliases": [ 00:07:54.872 "8edf2682-6d13-4c5a-af1c-43b69bdac9dd" 00:07:54.872 ], 00:07:54.872 "product_name": "NVMe disk", 00:07:54.872 "block_size": 4096, 00:07:54.872 "num_blocks": 38912, 00:07:54.872 "uuid": "8edf2682-6d13-4c5a-af1c-43b69bdac9dd", 00:07:54.872 "numa_id": 1, 00:07:54.872 "assigned_rate_limits": { 00:07:54.872 "rw_ios_per_sec": 0, 00:07:54.872 "rw_mbytes_per_sec": 0, 00:07:54.872 "r_mbytes_per_sec": 0, 00:07:54.872 "w_mbytes_per_sec": 0 00:07:54.872 }, 00:07:54.872 "claimed": false, 00:07:54.872 "zoned": false, 00:07:54.872 "supported_io_types": { 00:07:54.872 "read": true, 
00:07:54.872 "write": true, 00:07:54.872 "unmap": true, 00:07:54.872 "flush": true, 00:07:54.872 "reset": true, 00:07:54.872 "nvme_admin": true, 00:07:54.872 "nvme_io": true, 00:07:54.872 "nvme_io_md": false, 00:07:54.872 "write_zeroes": true, 00:07:54.872 "zcopy": false, 00:07:54.872 "get_zone_info": false, 00:07:54.872 "zone_management": false, 00:07:54.872 "zone_append": false, 00:07:54.872 "compare": true, 00:07:54.872 "compare_and_write": true, 00:07:54.872 "abort": true, 00:07:54.872 "seek_hole": false, 00:07:54.872 "seek_data": false, 00:07:54.872 "copy": true, 00:07:54.872 "nvme_iov_md": false 00:07:54.872 }, 00:07:54.872 "memory_domains": [ 00:07:54.872 { 00:07:54.872 "dma_device_id": "system", 00:07:54.872 "dma_device_type": 1 00:07:54.872 } 00:07:54.872 ], 00:07:54.872 "driver_specific": { 00:07:54.872 "nvme": [ 00:07:54.872 { 00:07:54.872 "trid": { 00:07:54.872 "trtype": "TCP", 00:07:54.872 "adrfam": "IPv4", 00:07:54.872 "traddr": "10.0.0.2", 00:07:54.872 "trsvcid": "4420", 00:07:54.872 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:07:54.872 }, 00:07:54.872 "ctrlr_data": { 00:07:54.872 "cntlid": 1, 00:07:54.872 "vendor_id": "0x8086", 00:07:54.872 "model_number": "SPDK bdev Controller", 00:07:54.872 "serial_number": "SPDK0", 00:07:54.872 "firmware_revision": "25.01", 00:07:54.872 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:54.872 "oacs": { 00:07:54.872 "security": 0, 00:07:54.872 "format": 0, 00:07:54.872 "firmware": 0, 00:07:54.872 "ns_manage": 0 00:07:54.872 }, 00:07:54.872 "multi_ctrlr": true, 00:07:54.872 "ana_reporting": false 00:07:54.872 }, 00:07:54.872 "vs": { 00:07:54.872 "nvme_version": "1.3" 00:07:54.872 }, 00:07:54.872 "ns_data": { 00:07:54.872 "id": 1, 00:07:54.872 "can_share": true 00:07:54.872 } 00:07:54.872 } 00:07:54.872 ], 00:07:54.872 "mp_policy": "active_passive" 00:07:54.872 } 00:07:54.872 } 00:07:54.872 ] 00:07:54.872 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # 
run_test_pid=3083806 00:07:54.872 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:07:54.872 10:25:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:07:54.872 Running I/O for 10 seconds... 00:07:55.808 Latency(us) 00:07:55.808 [2024-11-20T09:25:36.539Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:55.808 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:55.808 Nvme0n1 : 1.00 23510.00 91.84 0.00 0.00 0.00 0.00 0.00 00:07:55.808 [2024-11-20T09:25:36.539Z] =================================================================================================================== 00:07:55.808 [2024-11-20T09:25:36.539Z] Total : 23510.00 91.84 0.00 0.00 0.00 0.00 0.00 00:07:55.808 00:07:56.743 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:07:57.002 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.002 Nvme0n1 : 2.00 23663.00 92.43 0.00 0.00 0.00 0.00 0.00 00:07:57.002 [2024-11-20T09:25:37.733Z] =================================================================================================================== 00:07:57.002 [2024-11-20T09:25:37.733Z] Total : 23663.00 92.43 0.00 0.00 0.00 0.00 0.00 00:07:57.002 00:07:57.002 true 00:07:57.002 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:07:57.002 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq 
-r '.[0].total_data_clusters' 00:07:57.260 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:07:57.260 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:07:57.260 10:25:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3083806 00:07:57.827 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:57.827 Nvme0n1 : 3.00 23674.33 92.48 0.00 0.00 0.00 0.00 0.00 00:07:57.827 [2024-11-20T09:25:38.558Z] =================================================================================================================== 00:07:57.827 [2024-11-20T09:25:38.558Z] Total : 23674.33 92.48 0.00 0.00 0.00 0.00 0.00 00:07:57.827 00:07:59.201 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:07:59.201 Nvme0n1 : 4.00 23695.25 92.56 0.00 0.00 0.00 0.00 0.00 00:07:59.201 [2024-11-20T09:25:39.932Z] =================================================================================================================== 00:07:59.201 [2024-11-20T09:25:39.932Z] Total : 23695.25 92.56 0.00 0.00 0.00 0.00 0.00 00:07:59.201 00:08:00.136 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:00.136 Nvme0n1 : 5.00 23669.40 92.46 0.00 0.00 0.00 0.00 0.00 00:08:00.136 [2024-11-20T09:25:40.867Z] =================================================================================================================== 00:08:00.136 [2024-11-20T09:25:40.867Z] Total : 23669.40 92.46 0.00 0.00 0.00 0.00 0.00 00:08:00.136 00:08:01.071 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:01.071 Nvme0n1 : 6.00 23695.17 92.56 0.00 0.00 0.00 0.00 0.00 00:08:01.071 [2024-11-20T09:25:41.802Z] =================================================================================================================== 00:08:01.071 
[2024-11-20T09:25:41.802Z] Total : 23695.17 92.56 0.00 0.00 0.00 0.00 0.00 00:08:01.071 00:08:02.006 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.006 Nvme0n1 : 7.00 23739.29 92.73 0.00 0.00 0.00 0.00 0.00 00:08:02.006 [2024-11-20T09:25:42.737Z] =================================================================================================================== 00:08:02.006 [2024-11-20T09:25:42.737Z] Total : 23739.29 92.73 0.00 0.00 0.00 0.00 0.00 00:08:02.006 00:08:02.941 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:02.941 Nvme0n1 : 8.00 23765.12 92.83 0.00 0.00 0.00 0.00 0.00 00:08:02.941 [2024-11-20T09:25:43.672Z] =================================================================================================================== 00:08:02.941 [2024-11-20T09:25:43.672Z] Total : 23765.12 92.83 0.00 0.00 0.00 0.00 0.00 00:08:02.941 00:08:03.876 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:03.876 Nvme0n1 : 9.00 23785.11 92.91 0.00 0.00 0.00 0.00 0.00 00:08:03.876 [2024-11-20T09:25:44.607Z] =================================================================================================================== 00:08:03.876 [2024-11-20T09:25:44.607Z] Total : 23785.11 92.91 0.00 0.00 0.00 0.00 0.00 00:08:03.876 00:08:05.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:05.252 Nvme0n1 : 10.00 23806.90 93.00 0.00 0.00 0.00 0.00 0.00 00:08:05.252 [2024-11-20T09:25:45.983Z] =================================================================================================================== 00:08:05.252 [2024-11-20T09:25:45.983Z] Total : 23806.90 93.00 0.00 0.00 0.00 0.00 0.00 00:08:05.252 00:08:05.252 00:08:05.252 Latency(us) 00:08:05.252 [2024-11-20T09:25:45.983Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.252 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:08:05.252 Nvme0n1 : 10.00 23804.34 92.99 0.00 0.00 5373.87 1544.78 10797.84 00:08:05.252 [2024-11-20T09:25:45.983Z] =================================================================================================================== 00:08:05.252 [2024-11-20T09:25:45.983Z] Total : 23804.34 92.99 0.00 0.00 5373.87 1544.78 10797.84 00:08:05.252 { 00:08:05.252 "results": [ 00:08:05.252 { 00:08:05.252 "job": "Nvme0n1", 00:08:05.252 "core_mask": "0x2", 00:08:05.252 "workload": "randwrite", 00:08:05.252 "status": "finished", 00:08:05.252 "queue_depth": 128, 00:08:05.252 "io_size": 4096, 00:08:05.253 "runtime": 10.003808, 00:08:05.253 "iops": 23804.33530911429, 00:08:05.253 "mibps": 92.98568480122769, 00:08:05.253 "io_failed": 0, 00:08:05.253 "io_timeout": 0, 00:08:05.253 "avg_latency_us": 5373.868727131223, 00:08:05.253 "min_latency_us": 1544.777142857143, 00:08:05.253 "max_latency_us": 10797.83619047619 00:08:05.253 } 00:08:05.253 ], 00:08:05.253 "core_count": 1 00:08:05.253 } 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3083618 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3083618 ']' 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3083618 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3083618 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:08:05.253 10:25:45 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3083618' 00:08:05.253 killing process with pid 3083618 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3083618 00:08:05.253 Received shutdown signal, test time was about 10.000000 seconds 00:08:05.253 00:08:05.253 Latency(us) 00:08:05.253 [2024-11-20T09:25:45.984Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:05.253 [2024-11-20T09:25:45.984Z] =================================================================================================================== 00:08:05.253 [2024-11-20T09:25:45.984Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3083618 00:08:05.253 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:05.511 10:25:45 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:05.511 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:05.511 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- 
# free_clusters=61 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3080473 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3080473 00:08:05.771 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3080473 Killed "${NVMF_APP[@]}" "$@" 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=3085658 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 3085658 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3085658 ']' 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:05.771 10:25:46 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:05.771 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:05.771 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:05.771 [2024-11-20 10:25:46.491804] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:05.771 [2024-11-20 10:25:46.491851] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:06.030 [2024-11-20 10:25:46.571329] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:06.030 [2024-11-20 10:25:46.611355] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:06.030 [2024-11-20 10:25:46.611390] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:06.030 [2024-11-20 10:25:46.611397] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:06.030 [2024-11-20 10:25:46.611403] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:06.030 [2024-11-20 10:25:46.611408] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:06.030 [2024-11-20 10:25:46.611985] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:06.030 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:06.289 [2024-11-20 10:25:46.913036] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:08:06.289 [2024-11-20 10:25:46.913126] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:08:06.289 [2024-11-20 10:25:46.913152] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8edf2682-6d13-4c5a-af1c-43b69bdac9dd 
00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:06.289 10:25:46 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:06.546 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8edf2682-6d13-4c5a-af1c-43b69bdac9dd -t 2000 00:08:06.805 [ 00:08:06.805 { 00:08:06.805 "name": "8edf2682-6d13-4c5a-af1c-43b69bdac9dd", 00:08:06.805 "aliases": [ 00:08:06.805 "lvs/lvol" 00:08:06.805 ], 00:08:06.805 "product_name": "Logical Volume", 00:08:06.805 "block_size": 4096, 00:08:06.805 "num_blocks": 38912, 00:08:06.805 "uuid": "8edf2682-6d13-4c5a-af1c-43b69bdac9dd", 00:08:06.805 "assigned_rate_limits": { 00:08:06.805 "rw_ios_per_sec": 0, 00:08:06.805 "rw_mbytes_per_sec": 0, 00:08:06.805 "r_mbytes_per_sec": 0, 00:08:06.805 "w_mbytes_per_sec": 0 00:08:06.805 }, 00:08:06.805 "claimed": false, 00:08:06.805 "zoned": false, 00:08:06.805 "supported_io_types": { 00:08:06.805 "read": true, 00:08:06.805 "write": true, 00:08:06.805 "unmap": true, 00:08:06.805 "flush": false, 00:08:06.805 "reset": true, 00:08:06.805 "nvme_admin": false, 00:08:06.805 "nvme_io": false, 00:08:06.805 "nvme_io_md": false, 00:08:06.805 "write_zeroes": true, 00:08:06.805 "zcopy": false, 00:08:06.805 "get_zone_info": false, 00:08:06.805 "zone_management": false, 00:08:06.805 "zone_append": 
false, 00:08:06.805 "compare": false, 00:08:06.805 "compare_and_write": false, 00:08:06.805 "abort": false, 00:08:06.805 "seek_hole": true, 00:08:06.805 "seek_data": true, 00:08:06.805 "copy": false, 00:08:06.805 "nvme_iov_md": false 00:08:06.805 }, 00:08:06.805 "driver_specific": { 00:08:06.805 "lvol": { 00:08:06.805 "lvol_store_uuid": "82aa1056-ea30-4807-aa19-eeb28e14ca09", 00:08:06.805 "base_bdev": "aio_bdev", 00:08:06.805 "thin_provision": false, 00:08:06.805 "num_allocated_clusters": 38, 00:08:06.805 "snapshot": false, 00:08:06.805 "clone": false, 00:08:06.805 "esnap_clone": false 00:08:06.805 } 00:08:06.805 } 00:08:06.805 } 00:08:06.805 ] 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:06.805 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:08:07.064 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:08:07.064 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_aio_delete aio_bdev 00:08:07.323 [2024-11-20 10:25:47.866244] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:07.323 10:25:47 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:07.323 10:25:47 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:07.582 request: 00:08:07.582 { 00:08:07.582 "uuid": "82aa1056-ea30-4807-aa19-eeb28e14ca09", 00:08:07.582 "method": "bdev_lvol_get_lvstores", 00:08:07.582 "req_id": 1 00:08:07.582 } 00:08:07.582 Got JSON-RPC error response 00:08:07.582 response: 00:08:07.582 { 00:08:07.582 "code": -19, 00:08:07.582 "message": "No such device" 00:08:07.582 } 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:07.582 aio_bdev 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@904 -- # local bdev_timeout= 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:08:07.582 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:07.840 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 8edf2682-6d13-4c5a-af1c-43b69bdac9dd -t 2000 00:08:08.100 [ 00:08:08.100 { 00:08:08.100 "name": "8edf2682-6d13-4c5a-af1c-43b69bdac9dd", 00:08:08.100 "aliases": [ 00:08:08.100 "lvs/lvol" 00:08:08.100 ], 00:08:08.100 "product_name": "Logical Volume", 00:08:08.100 "block_size": 4096, 00:08:08.100 "num_blocks": 38912, 00:08:08.100 "uuid": "8edf2682-6d13-4c5a-af1c-43b69bdac9dd", 00:08:08.100 "assigned_rate_limits": { 00:08:08.100 "rw_ios_per_sec": 0, 00:08:08.100 "rw_mbytes_per_sec": 0, 00:08:08.100 "r_mbytes_per_sec": 0, 00:08:08.100 "w_mbytes_per_sec": 0 00:08:08.100 }, 00:08:08.100 "claimed": false, 00:08:08.100 "zoned": false, 00:08:08.100 "supported_io_types": { 00:08:08.100 "read": true, 00:08:08.100 "write": true, 00:08:08.100 "unmap": true, 00:08:08.100 "flush": false, 00:08:08.100 "reset": true, 00:08:08.100 "nvme_admin": false, 00:08:08.100 "nvme_io": false, 00:08:08.100 "nvme_io_md": false, 00:08:08.100 "write_zeroes": true, 00:08:08.100 "zcopy": false, 00:08:08.100 "get_zone_info": false, 00:08:08.100 "zone_management": false, 00:08:08.100 "zone_append": false, 00:08:08.100 "compare": false, 00:08:08.100 "compare_and_write": false, 
00:08:08.100 "abort": false, 00:08:08.100 "seek_hole": true, 00:08:08.100 "seek_data": true, 00:08:08.100 "copy": false, 00:08:08.100 "nvme_iov_md": false 00:08:08.100 }, 00:08:08.100 "driver_specific": { 00:08:08.100 "lvol": { 00:08:08.100 "lvol_store_uuid": "82aa1056-ea30-4807-aa19-eeb28e14ca09", 00:08:08.100 "base_bdev": "aio_bdev", 00:08:08.100 "thin_provision": false, 00:08:08.100 "num_allocated_clusters": 38, 00:08:08.100 "snapshot": false, 00:08:08.100 "clone": false, 00:08:08.100 "esnap_clone": false 00:08:08.100 } 00:08:08.100 } 00:08:08.100 } 00:08:08.100 ] 00:08:08.100 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:08:08.100 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:08.100 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:08.360 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:08.360 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:08.360 10:25:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:08.360 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:08.360 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 8edf2682-6d13-4c5a-af1c-43b69bdac9dd 00:08:08.625 10:25:49 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 82aa1056-ea30-4807-aa19-eeb28e14ca09 00:08:08.883 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:08.883 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:09.142 00:08:09.142 real 0m16.764s 00:08:09.142 user 0m43.513s 00:08:09.142 sys 0m3.638s 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:09.142 ************************************ 00:08:09.142 END TEST lvs_grow_dirty 00:08:09.142 ************************************ 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:08:09.142 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow 
-- common/autotest_common.sh@824 -- # for n in $shm_files 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:08:09.143 nvmf_trace.0 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:09.143 rmmod nvme_tcp 00:08:09.143 rmmod nvme_fabrics 00:08:09.143 rmmod nvme_keyring 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 3085658 ']' 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 3085658 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # '[' -z 3085658 ']' 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3085658 
00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3085658 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3085658' 00:08:09.143 killing process with pid 3085658 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3085658 00:08:09.143 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3085658 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:09.402 10:25:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:11.941 10:25:52 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:08:11.941 00:08:11.941 real 0m41.726s 00:08:11.941 user 1m4.206s 00:08:11.941 sys 0m10.220s 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:11.941 ************************************ 00:08:11.941 END TEST nvmf_lvs_grow 00:08:11.941 ************************************ 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:11.941 ************************************ 00:08:11.941 START TEST nvmf_bdev_io_wait 00:08:11.941 ************************************ 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 
00:08:11.941 * Looking for test storage... 00:08:11.941 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:08:11.941 10:25:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:08:11.941 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.942 --rc genhtml_branch_coverage=1 00:08:11.942 --rc genhtml_function_coverage=1 00:08:11.942 --rc genhtml_legend=1 00:08:11.942 --rc geninfo_all_blocks=1 00:08:11.942 --rc geninfo_unexecuted_blocks=1 00:08:11.942 00:08:11.942 ' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.942 --rc genhtml_branch_coverage=1 00:08:11.942 --rc genhtml_function_coverage=1 00:08:11.942 --rc genhtml_legend=1 00:08:11.942 --rc geninfo_all_blocks=1 00:08:11.942 --rc geninfo_unexecuted_blocks=1 00:08:11.942 00:08:11.942 ' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.942 --rc genhtml_branch_coverage=1 00:08:11.942 --rc genhtml_function_coverage=1 00:08:11.942 --rc genhtml_legend=1 00:08:11.942 --rc geninfo_all_blocks=1 00:08:11.942 --rc geninfo_unexecuted_blocks=1 00:08:11.942 00:08:11.942 ' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:11.942 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:11.942 --rc genhtml_branch_coverage=1 00:08:11.942 --rc genhtml_function_coverage=1 00:08:11.942 --rc genhtml_legend=1 00:08:11.942 --rc geninfo_all_blocks=1 00:08:11.942 --rc geninfo_unexecuted_blocks=1 00:08:11.942 00:08:11.942 ' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:08:11.942 10:25:52 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@50 -- # : 0 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:11.942 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:08:11.942 10:25:52 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:08:18.513 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:18.514 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # local -ga e810 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:18.514 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:18.514 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:18.514 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:18.514 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:18.514 Found net devices under 0000:86:00.0: cvl_0_0 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:18.514 Found net devices under 0000:86:00.1: cvl_0_1 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:18.514 
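The `pci_net_devs` records above show how the script maps a PCI address to its kernel interface name: glob `/sys/bus/pci/devices/$pci/net/*` and strip the leading path. A minimal sketch of that step, using a scratch directory standing in for sysfs so it runs without the 0000:86:00.0 hardware from this log (the directory layout is the assumption here; on a real host you would glob under `/sys` directly):

```shell
#!/usr/bin/env bash
# Sketch of the discovery step traced above. A temp dir mimics the
# /sys/bus/pci/devices/<pci>/net/<ifname> layout the kernel exposes.
sysfs=$(mktemp -d)
pci=0000:86:00.0
mkdir -p "$sysfs/$pci/net/cvl_0_0"

pci_net_devs=("$sysfs/$pci/net/"*)          # one entry per net device
pci_net_devs=("${pci_net_devs[@]##*/}")     # strip path, keep ifname only
echo "Found net devices under $pci: ${pci_net_devs[*]}"
```

The `${array[@]##*/}` expansion applies the prefix-strip to every element at once, which is why the trace shows a single assignment rather than a loop.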
10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 
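The `setup_interfaces` call above hands out addresses from an integer pool starting at `0x0a000001` (10.0.0.1), two per initiator/target pair, and `val_to_ip` turns the integer into a dotted quad with `printf`. A hedged reconstruction of that allocation (the loop shape and the two-pair count are illustrative; the real logic lives in nvmf/setup.sh):

```shell
#!/usr/bin/env bash
# Sketch, not the SPDK script itself: allocate consecutive address pairs
# from the same 0x0a000001 pool seen in the trace.
ip_pool=$((0x0a000001))   # 10.0.0.1
pairs=2                   # hypothetical: two initiator/target pairs

val_to_ip() {             # integer -> dotted quad, as in nvmf/setup.sh
  local val=$1
  printf '%u.%u.%u.%u' $((val >> 24)) $(((val >> 16) & 0xff)) \
                       $(((val >> 8) & 0xff)) $((val & 0xff))
}

for ((dev = 0; dev < pairs; dev++)); do
  initiator=$((ip_pool + dev * 2))
  target=$((initiator + 1))
  echo "pair$dev: initiator=$(val_to_ip "$initiator") target=$(val_to_ip "$target")"
done
```

With one pair, as in this run, that yields exactly the 10.0.0.1 (initiator, `cvl_0_0`) and 10.0.0.2 (target, `cvl_0_1` inside the namespace) addresses the trace assigns.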
00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:18.514 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:08:18.515 10.0.0.1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:18.515 10.0.0.2 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:18.515 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:18.515 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:18.515 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:18.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.304 ms 00:08:18.516 00:08:18.516 --- 10.0.0.1 ping statistics --- 00:08:18.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.516 rtt min/avg/max/mdev = 0.304/0.304/0.304/0.000 ms 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:18.516 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:18.516 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.149 ms 00:08:18.516 00:08:18.516 --- 10.0.0.2 ping statistics --- 00:08:18.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:18.516 rtt min/avg/max/mdev = 0.149/0.149/0.149/0.000 ms 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
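`set_ip`, `set_up`, `ping_ip`, and `get_ip_address` in the trace all share one dispatch idiom: an optional argument names a bash array (`NVMF_TARGET_NS_CMD`, i.e. `ip netns exec nvmf_ns_spdk`), and a `local -n` nameref prefixes the command with that array when it is set. A simplified sketch of the pattern (`run_with` and `WRAP` are assumed names for illustration; the real helpers build the command string and `eval` it, and namerefs need bash 4.3+):

```shell
#!/usr/bin/env bash
# Sketch of the `local -n ns=...` pattern from nvmf/setup.sh (simplified).
run_with() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns      # nameref: "ns" now aliases the caller's array
    "${ns[@]}" "$@"         # e.g. ip netns exec nvmf_ns_spdk ip link set ...
  else
    "$@"                    # no wrapper array named: run directly
  fi
}

WRAP=(env)                  # stand-in for (ip netns exec nvmf_ns_spdk)
run_with ""   echo direct   # runs: echo direct
run_with WRAP echo wrapped  # runs: env echo wrapped
```

Passing the array's *name* rather than its contents is what lets one helper serve both the host side (initiator) and the namespaced side (target) without duplicating every command.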
00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:18.516 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:18.517 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=3089887 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 3089887 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3089887 ']' 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:18.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 [2024-11-20 10:25:58.537088] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:18.517 [2024-11-20 10:25:58.537134] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:18.517 [2024-11-20 10:25:58.615287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:18.517 [2024-11-20 10:25:58.658609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:18.517 [2024-11-20 10:25:58.658647] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:18.517 [2024-11-20 10:25:58.658654] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:18.517 [2024-11-20 10:25:58.658660] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:18.517 [2024-11-20 10:25:58.658664] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:18.517 [2024-11-20 10:25:58.660260] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:18.517 [2024-11-20 10:25:58.660391] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:18.517 [2024-11-20 10:25:58.660699] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:18.517 [2024-11-20 10:25:58.660700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 [2024-11-20 10:25:58.796715] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 Malloc0 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:18.517 10:25:58 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:18.517 [2024-11-20 10:25:58.852061] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3089983 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3089985 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.517 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait 
-- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.517 { 00:08:18.517 "params": { 00:08:18.517 "name": "Nvme$subsystem", 00:08:18.517 "trtype": "$TEST_TRANSPORT", 00:08:18.517 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.517 "adrfam": "ipv4", 00:08:18.517 "trsvcid": "$NVMF_PORT", 00:08:18.517 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.517 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.517 "hdgst": ${hdgst:-false}, 00:08:18.517 "ddgst": ${ddgst:-false} 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 } 00:08:18.518 EOF 00:08:18.518 )") 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3089987 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.518 { 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme$subsystem", 00:08:18.518 "trtype": "$TEST_TRANSPORT", 00:08:18.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "$NVMF_PORT", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.518 "hdgst": ${hdgst:-false}, 00:08:18.518 "ddgst": ${ddgst:-false} 00:08:18.518 }, 
00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 } 00:08:18.518 EOF 00:08:18.518 )") 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3089990 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.518 { 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme$subsystem", 00:08:18.518 "trtype": "$TEST_TRANSPORT", 00:08:18.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "$NVMF_PORT", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.518 "hdgst": 
${hdgst:-false}, 00:08:18.518 "ddgst": ${ddgst:-false} 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 } 00:08:18.518 EOF 00:08:18.518 )") 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:08:18.518 { 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme$subsystem", 00:08:18.518 "trtype": "$TEST_TRANSPORT", 00:08:18.518 "traddr": "$NVMF_FIRST_TARGET_IP", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "$NVMF_PORT", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:08:18.518 "hdgst": ${hdgst:-false}, 00:08:18.518 "ddgst": ${ddgst:-false} 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 } 00:08:18.518 EOF 00:08:18.518 )") 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3089983 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme1", 00:08:18.518 "trtype": "tcp", 00:08:18.518 "traddr": "10.0.0.2", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "4420", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.518 "hdgst": false, 00:08:18.518 "ddgst": false 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 }' 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme1", 00:08:18.518 "trtype": "tcp", 00:08:18.518 "traddr": "10.0.0.2", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "4420", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.518 "hdgst": false, 00:08:18.518 "ddgst": false 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 }' 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme1", 00:08:18.518 "trtype": "tcp", 00:08:18.518 "traddr": "10.0.0.2", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "4420", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.518 "hdgst": false, 00:08:18.518 "ddgst": false 00:08:18.518 }, 00:08:18.518 "method": 
"bdev_nvme_attach_controller" 00:08:18.518 }' 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:08:18.518 10:25:58 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:08:18.518 "params": { 00:08:18.518 "name": "Nvme1", 00:08:18.518 "trtype": "tcp", 00:08:18.518 "traddr": "10.0.0.2", 00:08:18.518 "adrfam": "ipv4", 00:08:18.518 "trsvcid": "4420", 00:08:18.518 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:08:18.518 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:08:18.518 "hdgst": false, 00:08:18.518 "ddgst": false 00:08:18.518 }, 00:08:18.518 "method": "bdev_nvme_attach_controller" 00:08:18.518 }' 00:08:18.518 [2024-11-20 10:25:58.901984] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:18.518 [2024-11-20 10:25:58.902032] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:08:18.519 [2024-11-20 10:25:58.905050] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:18.519 [2024-11-20 10:25:58.905090] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:08:18.519 [2024-11-20 10:25:58.906282] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:18.519 [2024-11-20 10:25:58.906327] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:08:18.519 [2024-11-20 10:25:58.906765] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:18.519 [2024-11-20 10:25:58.906805] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:08:18.519 [2024-11-20 10:25:59.081345] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.519 [2024-11-20 10:25:59.123771] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:08:18.519 [2024-11-20 10:25:59.181021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.519 [2024-11-20 10:25:59.223601] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:08:18.776 [2024-11-20 10:25:59.274539] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.776 [2024-11-20 10:25:59.330564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:08:18.776 [2024-11-20 10:25:59.334730] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:18.776 [2024-11-20 10:25:59.377267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:08:18.776 Running I/O for 1 seconds... 00:08:18.776 Running I/O for 1 seconds... 00:08:19.046 Running I/O for 1 seconds... 00:08:19.046 Running I/O for 1 seconds... 
00:08:20.044 11966.00 IOPS, 46.74 MiB/s 00:08:20.044 Latency(us) 00:08:20.044 [2024-11-20T09:26:00.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.044 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:08:20.044 Nvme1n1 : 1.01 12026.24 46.98 0.00 0.00 10607.95 5523.75 14979.66 00:08:20.044 [2024-11-20T09:26:00.775Z] =================================================================================================================== 00:08:20.044 [2024-11-20T09:26:00.775Z] Total : 12026.24 46.98 0.00 0.00 10607.95 5523.75 14979.66 00:08:20.044 11044.00 IOPS, 43.14 MiB/s 00:08:20.044 Latency(us) 00:08:20.044 [2024-11-20T09:26:00.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.044 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:08:20.044 Nvme1n1 : 1.01 11112.91 43.41 0.00 0.00 11481.93 4556.31 21096.35 00:08:20.044 [2024-11-20T09:26:00.775Z] =================================================================================================================== 00:08:20.044 [2024-11-20T09:26:00.775Z] Total : 11112.91 43.41 0.00 0.00 11481.93 4556.31 21096.35 00:08:20.044 10090.00 IOPS, 39.41 MiB/s 00:08:20.044 Latency(us) 00:08:20.044 [2024-11-20T09:26:00.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.044 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:08:20.044 Nvme1n1 : 1.01 10163.56 39.70 0.00 0.00 12556.35 4462.69 22094.99 00:08:20.044 [2024-11-20T09:26:00.775Z] =================================================================================================================== 00:08:20.044 [2024-11-20T09:26:00.775Z] Total : 10163.56 39.70 0.00 0.00 12556.35 4462.69 22094.99 00:08:20.044 253112.00 IOPS, 988.72 MiB/s 00:08:20.044 Latency(us) 00:08:20.044 [2024-11-20T09:26:00.775Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:20.044 Job: Nvme1n1 (Core 
Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:08:20.044 Nvme1n1 : 1.00 252729.28 987.22 0.00 0.00 503.75 223.33 1490.16 00:08:20.044 [2024-11-20T09:26:00.775Z] =================================================================================================================== 00:08:20.044 [2024-11-20T09:26:00.775Z] Total : 252729.28 987.22 0.00 0.00 503.75 223.33 1490.16 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3089985 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3089987 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3089990 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 
00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:20.044 rmmod nvme_tcp 00:08:20.044 rmmod nvme_fabrics 00:08:20.044 rmmod nvme_keyring 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 3089887 ']' 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 3089887 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3089887 ']' 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3089887 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:20.044 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3089887 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3089887' 00:08:20.304 killing process with pid 3089887 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3089887 00:08:20.304 10:26:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3089887 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:20.304 10:26:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:22.841 10:26:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:08:22.841 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:08:22.842 00:08:22.842 real 0m10.875s 00:08:22.842 user 0m15.840s 00:08:22.842 sys 0m6.343s 00:08:22.842 
10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 ************************************ 00:08:22.842 END TEST nvmf_bdev_io_wait 00:08:22.842 ************************************ 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:22.842 ************************************ 00:08:22.842 START TEST nvmf_queue_depth 00:08:22.842 ************************************ 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:08:22.842 * Looking for test storage... 
00:08:22.842 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@338 -- # local 'op=<' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:08:22.842 
10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:22.842 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:08:22.842 --rc genhtml_branch_coverage=1 00:08:22.842 --rc genhtml_function_coverage=1 00:08:22.842 --rc genhtml_legend=1 00:08:22.842 --rc geninfo_all_blocks=1 00:08:22.842 --rc geninfo_unexecuted_blocks=1 00:08:22.842 00:08:22.842 ' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.842 --rc genhtml_branch_coverage=1 00:08:22.842 --rc genhtml_function_coverage=1 00:08:22.842 --rc genhtml_legend=1 00:08:22.842 --rc geninfo_all_blocks=1 00:08:22.842 --rc geninfo_unexecuted_blocks=1 00:08:22.842 00:08:22.842 ' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.842 --rc genhtml_branch_coverage=1 00:08:22.842 --rc genhtml_function_coverage=1 00:08:22.842 --rc genhtml_legend=1 00:08:22.842 --rc geninfo_all_blocks=1 00:08:22.842 --rc geninfo_unexecuted_blocks=1 00:08:22.842 00:08:22.842 ' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:22.842 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:22.842 --rc genhtml_branch_coverage=1 00:08:22.842 --rc genhtml_function_coverage=1 00:08:22.842 --rc genhtml_legend=1 00:08:22.842 --rc geninfo_all_blocks=1 00:08:22.842 --rc geninfo_unexecuted_blocks=1 00:08:22.842 00:08:22.842 ' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:22.842 10:26:03 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.842 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@50 -- # : 0 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:22.843 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 
00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:08:22.843 10:26:03 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:08:29.414 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:29.414 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in 
"${pci_devs[@]}" 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:29.414 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:29.414 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.415 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:29.415 Found net devices under 0000:86:00.0: cvl_0_0 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:29.415 Found net devices under 0000:86:00.1: cvl_0_1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ 
tcp == tcp ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA 
dev_map 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 
00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:08:29.415 10.0.0.1 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:29.415 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:29.416 10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:29.416 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:29.416 
10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:29.416 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:29.416 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.322 ms 00:08:29.416 00:08:29.416 --- 10.0.0.1 ping statistics --- 00:08:29.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.416 rtt min/avg/max/mdev = 0.322/0.322/0.322/0.000 ms 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:29.416 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:29.416 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.126 ms 00:08:29.416 00:08:29.416 --- 10.0.0.2 ping statistics --- 00:08:29.416 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:29.416 rtt min/avg/max/mdev = 0.126/0.126/0.126/0.000 ms 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@331 -- # 
NVMF_TARGET_INTERFACE=cvl_0_1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:29.416 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:29.416 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=3093819 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 3093819 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3093819 ']' 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:29.417 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 [2024-11-20 10:26:09.553884] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:29.417 [2024-11-20 10:26:09.553937] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:29.417 [2024-11-20 10:26:09.636104] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.417 [2024-11-20 10:26:09.675719] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:29.417 [2024-11-20 10:26:09.675755] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:29.417 [2024-11-20 10:26:09.675762] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:29.417 [2024-11-20 10:26:09.675767] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:29.417 [2024-11-20 10:26:09.675772] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:08:29.417 [2024-11-20 10:26:09.676365] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 [2024-11-20 10:26:09.823120] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 Malloc0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.417 10:26:09 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.417 [2024-11-20 10:26:09.873447] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.417 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3094051 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 
1024 -o 4096 -w verify -t 10 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3094051 /var/tmp/bdevperf.sock 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3094051 ']' 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:29.418 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:29.418 10:26:09 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.418 [2024-11-20 10:26:09.923199] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:08:29.418 [2024-11-20 10:26:09.923245] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3094051 ] 00:08:29.418 [2024-11-20 10:26:09.996302] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:29.418 [2024-11-20 10:26:10.042548] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:29.418 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:29.418 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:08:29.418 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:08:29.418 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:29.418 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:29.676 NVMe0n1 00:08:29.676 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:29.676 10:26:10 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:29.676 Running I/O for 10 seconds... 
00:08:31.988 11880.00 IOPS, 46.41 MiB/s [2024-11-20T09:26:13.656Z] 12156.50 IOPS, 47.49 MiB/s [2024-11-20T09:26:14.592Z] 12265.33 IOPS, 47.91 MiB/s [2024-11-20T09:26:15.657Z] 12292.75 IOPS, 48.02 MiB/s [2024-11-20T09:26:16.593Z] 12430.60 IOPS, 48.56 MiB/s [2024-11-20T09:26:17.529Z] 12440.00 IOPS, 48.59 MiB/s [2024-11-20T09:26:18.464Z] 12431.86 IOPS, 48.56 MiB/s [2024-11-20T09:26:19.846Z] 12486.50 IOPS, 48.78 MiB/s [2024-11-20T09:26:20.414Z] 12505.78 IOPS, 48.85 MiB/s [2024-11-20T09:26:20.673Z] 12527.20 IOPS, 48.93 MiB/s 00:08:39.942 Latency(us) 00:08:39.942 [2024-11-20T09:26:20.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.942 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:08:39.942 Verification LBA range: start 0x0 length 0x4000 00:08:39.942 NVMe0n1 : 10.06 12547.00 49.01 0.00 0.00 81316.95 17725.93 52428.80 00:08:39.942 [2024-11-20T09:26:20.673Z] =================================================================================================================== 00:08:39.942 [2024-11-20T09:26:20.673Z] Total : 12547.00 49.01 0.00 0.00 81316.95 17725.93 52428.80 00:08:39.942 { 00:08:39.942 "results": [ 00:08:39.942 { 00:08:39.942 "job": "NVMe0n1", 00:08:39.942 "core_mask": "0x1", 00:08:39.942 "workload": "verify", 00:08:39.942 "status": "finished", 00:08:39.942 "verify_range": { 00:08:39.942 "start": 0, 00:08:39.942 "length": 16384 00:08:39.942 }, 00:08:39.942 "queue_depth": 1024, 00:08:39.942 "io_size": 4096, 00:08:39.942 "runtime": 10.05794, 00:08:39.942 "iops": 12547.00266655001, 00:08:39.942 "mibps": 49.011729166210976, 00:08:39.942 "io_failed": 0, 00:08:39.942 "io_timeout": 0, 00:08:39.942 "avg_latency_us": 81316.95085118996, 00:08:39.942 "min_latency_us": 17725.92761904762, 00:08:39.942 "max_latency_us": 52428.8 00:08:39.942 } 00:08:39.942 ], 00:08:39.942 "core_count": 1 00:08:39.942 } 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # 
killprocess 3094051 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3094051 ']' 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3094051 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3094051 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3094051' 00:08:39.942 killing process with pid 3094051 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3094051 00:08:39.942 Received shutdown signal, test time was about 10.000000 seconds 00:08:39.942 00:08:39.942 Latency(us) 00:08:39.942 [2024-11-20T09:26:20.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:39.942 [2024-11-20T09:26:20.673Z] =================================================================================================================== 00:08:39.942 [2024-11-20T09:26:20.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:39.942 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3094051 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # 
nvmftestfini 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:40.202 rmmod nvme_tcp 00:08:40.202 rmmod nvme_fabrics 00:08:40.202 rmmod nvme_keyring 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 3093819 ']' 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 3093819 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3093819 ']' 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3093819 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3093819 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3093819' 00:08:40.202 killing process with pid 3093819 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3093819 00:08:40.202 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3093819 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:40.461 10:26:20 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:42.369 10:26:23 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:08:42.369 
10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:08:42.369 00:08:42.369 real 0m19.950s 00:08:42.369 user 0m23.211s 00:08:42.369 sys 0m6.160s 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:08:42.369 ************************************ 00:08:42.369 END TEST nvmf_queue_depth 00:08:42.369 ************************************ 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:42.369 10:26:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:42.630 ************************************ 00:08:42.630 START TEST nvmf_nmic 00:08:42.630 ************************************ 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:08:42.630 * Looking for test storage... 
00:08:42.630 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # IFS=.-: 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:42.630 10:26:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.630 --rc genhtml_branch_coverage=1 00:08:42.630 --rc genhtml_function_coverage=1 00:08:42.630 --rc genhtml_legend=1 00:08:42.630 --rc geninfo_all_blocks=1 00:08:42.630 --rc geninfo_unexecuted_blocks=1 
00:08:42.630 00:08:42.630 ' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.630 --rc genhtml_branch_coverage=1 00:08:42.630 --rc genhtml_function_coverage=1 00:08:42.630 --rc genhtml_legend=1 00:08:42.630 --rc geninfo_all_blocks=1 00:08:42.630 --rc geninfo_unexecuted_blocks=1 00:08:42.630 00:08:42.630 ' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.630 --rc genhtml_branch_coverage=1 00:08:42.630 --rc genhtml_function_coverage=1 00:08:42.630 --rc genhtml_legend=1 00:08:42.630 --rc geninfo_all_blocks=1 00:08:42.630 --rc geninfo_unexecuted_blocks=1 00:08:42.630 00:08:42.630 ' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:08:42.630 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:42.630 --rc genhtml_branch_coverage=1 00:08:42.630 --rc genhtml_function_coverage=1 00:08:42.630 --rc genhtml_legend=1 00:08:42.630 --rc geninfo_all_blocks=1 00:08:42.630 --rc geninfo_unexecuted_blocks=1 00:08:42.630 00:08:42.630 ' 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:42.630 10:26:23 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:42.630 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
paths/export.sh@5 -- # export PATH 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:42.631 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:08:42.631 10:26:23 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@10 -- # set +x 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@133 -- # local -A pci_drivers 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:49.203 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 
0x159b)' 00:08:49.203 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:49.203 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:49.203 Found net devices under 0000:86:00.0: cvl_0_0 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:49.203 Found net devices under 0000:86:00.1: cvl_0_1 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:08:49.203 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:08:49.203 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set 
lo up 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # 
echo 10.0.0.1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:08:49.204 10.0.0.1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:08:49.204 10.0.0.2 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local 
dev=cvl_0_0 in_ns= 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:08:49.204 
10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:08:49.204 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:49.204 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:08:49.204 00:08:49.204 --- 10.0.0.1 ping statistics --- 00:08:49.204 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.204 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.204 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:08:49.205 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:49.205 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.248 ms 00:08:49.205 00:08:49.205 --- 10.0.0.2 ping statistics --- 00:08:49.205 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:49.205 rtt min/avg/max/mdev = 0.248/0.248/0.248/0.000 ms 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:08:49.205 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 
-- # local dev=target0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:08:49.205 10:26:29 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=3099442 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 3099442 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3099442 ']' 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:49.205 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:49.206 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:49.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:49.206 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:49.206 10:26:29 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.206 [2024-11-20 10:26:29.527437] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:08:49.206 [2024-11-20 10:26:29.527482] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:49.206 [2024-11-20 10:26:29.608462] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:49.206 [2024-11-20 10:26:29.651609] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:49.206 [2024-11-20 10:26:29.651648] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:08:49.206 [2024-11-20 10:26:29.651655] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:49.206 [2024-11-20 10:26:29.651661] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:49.206 [2024-11-20 10:26:29.651667] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:49.206 [2024-11-20 10:26:29.653112] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:08:49.206 [2024-11-20 10:26:29.653234] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:08:49.206 [2024-11-20 10:26:29.653129] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:08:49.206 [2024-11-20 10:26:29.653236] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 [2024-11-20 10:26:30.428681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:49.772 
10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 Malloc0 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:49.772 [2024-11-20 10:26:30.490739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** 
NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:08:49.772 test case1: single bdev can't be used in multiple subsystems 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:49.772 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 [2024-11-20 10:26:30.518638] bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:08:50.030 [2024-11-20 
10:26:30.518659] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:08:50.030 [2024-11-20 10:26:30.518666] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:08:50.030 request: 00:08:50.030 { 00:08:50.030 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:08:50.030 "namespace": { 00:08:50.030 "bdev_name": "Malloc0", 00:08:50.030 "no_auto_visible": false 00:08:50.030 }, 00:08:50.030 "method": "nvmf_subsystem_add_ns", 00:08:50.030 "req_id": 1 00:08:50.030 } 00:08:50.030 Got JSON-RPC error response 00:08:50.030 response: 00:08:50.030 { 00:08:50.030 "code": -32602, 00:08:50.030 "message": "Invalid parameters" 00:08:50.030 } 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:08:50.030 Adding namespace failed - expected result. 
00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:08:50.030 test case2: host connect to nvmf target in multiple paths 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:50.030 [2024-11-20 10:26:30.530769] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:50.030 10:26:30 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:08:51.402 10:26:31 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:08:52.334 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:08:52.334 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:08:52.334 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:08:52.334 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:08:52.334 10:26:32 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 
00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:08:54.230 10:26:34 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:08:54.230 [global] 00:08:54.230 thread=1 00:08:54.230 invalidate=1 00:08:54.230 rw=write 00:08:54.230 time_based=1 00:08:54.230 runtime=1 00:08:54.230 ioengine=libaio 00:08:54.230 direct=1 00:08:54.230 bs=4096 00:08:54.230 iodepth=1 00:08:54.230 norandommap=0 00:08:54.230 numjobs=1 00:08:54.230 00:08:54.230 verify_dump=1 00:08:54.230 verify_backlog=512 00:08:54.230 verify_state_save=0 00:08:54.230 do_verify=1 00:08:54.230 verify=crc32c-intel 00:08:54.230 [job0] 00:08:54.230 filename=/dev/nvme0n1 00:08:54.230 Could not set queue depth (nvme0n1) 00:08:54.509 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:08:54.509 fio-3.35 00:08:54.509 Starting 1 thread 00:08:55.886 00:08:55.886 job0: (groupid=0, jobs=1): err= 0: pid=3100530: Wed Nov 20 10:26:36 2024 00:08:55.886 read: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec) 00:08:55.886 slat (nsec): min=6966, max=39242, avg=8014.75, stdev=1504.05 00:08:55.886 clat (usec): min=149, max=273, avg=190.75, stdev=24.04 00:08:55.886 lat (usec): min=162, max=281, avg=198.77, 
stdev=24.22 00:08:55.886 clat percentiles (usec): 00:08:55.886 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 169], 00:08:55.886 | 30.00th=[ 172], 40.00th=[ 176], 50.00th=[ 180], 60.00th=[ 200], 00:08:55.886 | 70.00th=[ 206], 80.00th=[ 212], 90.00th=[ 221], 95.00th=[ 239], 00:08:55.886 | 99.00th=[ 253], 99.50th=[ 260], 99.90th=[ 269], 99.95th=[ 273], 00:08:55.886 | 99.99th=[ 273] 00:08:55.886 write: IOPS=2662, BW=10.4MiB/s (10.9MB/s)(10.4MiB/1001msec); 0 zone resets 00:08:55.886 slat (usec): min=10, max=23862, avg=20.75, stdev=462.02 00:08:55.886 clat (usec): min=111, max=569, avg=157.17, stdev=25.44 00:08:55.886 lat (usec): min=122, max=24163, avg=177.92, stdev=465.50 00:08:55.886 clat percentiles (usec): 00:08:55.886 | 1.00th=[ 120], 5.00th=[ 124], 10.00th=[ 127], 20.00th=[ 137], 00:08:55.886 | 30.00th=[ 145], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 159], 00:08:55.886 | 70.00th=[ 163], 80.00th=[ 169], 90.00th=[ 198], 95.00th=[ 206], 00:08:55.886 | 99.00th=[ 223], 99.50th=[ 243], 99.90th=[ 318], 99.95th=[ 367], 00:08:55.886 | 99.99th=[ 570] 00:08:55.886 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:08:55.886 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:08:55.886 lat (usec) : 250=99.12%, 500=0.86%, 750=0.02% 00:08:55.886 cpu : usr=4.60%, sys=7.90%, ctx=5228, majf=0, minf=1 00:08:55.886 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:08:55.886 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.886 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:55.886 issued rwts: total=2560,2665,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:55.886 latency : target=0, window=0, percentile=100.00%, depth=1 00:08:55.886 00:08:55.886 Run status group 0 (all jobs): 00:08:55.886 READ: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:08:55.886 WRITE: bw=10.4MiB/s 
(10.9MB/s), 10.4MiB/s-10.4MiB/s (10.9MB/s-10.9MB/s), io=10.4MiB (10.9MB), run=1001-1001msec 00:08:55.886 00:08:55.886 Disk stats (read/write): 00:08:55.886 nvme0n1: ios=2172/2560, merge=0/0, ticks=1376/382, in_queue=1758, util=98.40% 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:08:55.886 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # nvmfcleanup 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:08:55.886 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:08:55.886 10:26:36 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:08:55.886 rmmod nvme_tcp 00:08:55.886 rmmod nvme_fabrics 00:08:55.886 rmmod nvme_keyring 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 3099442 ']' 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 3099442 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3099442 ']' 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3099442 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3099442 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3099442' 00:08:56.145 killing process with pid 3099442 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3099442 00:08:56.145 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3099442 00:08:56.404 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == 
iso ']' 00:08:56.404 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:08:56.404 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:08:56.404 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:08:56.405 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:56.405 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:56.405 10:26:36 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:08:58.310 10:26:38 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:08:58.310 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:08:58.311 00:08:58.311 real 0m15.831s 00:08:58.311 user 0m36.495s 00:08:58.311 sys 0m5.499s 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:08:58.311 ************************************ 00:08:58.311 END TEST nvmf_nmic 00:08:58.311 ************************************ 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:58.311 10:26:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:58.311 ************************************ 00:08:58.311 START TEST nvmf_fio_target 00:08:58.311 ************************************ 00:08:58.311 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:08:58.570 * Looking for test storage... 00:08:58.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 
00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@341 -- # ver2_l=1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:08:58.570 10:26:39 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:08:58.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.570 --rc genhtml_branch_coverage=1 00:08:58.570 --rc genhtml_function_coverage=1 00:08:58.570 --rc genhtml_legend=1 00:08:58.570 --rc geninfo_all_blocks=1 00:08:58.570 --rc geninfo_unexecuted_blocks=1 00:08:58.570 00:08:58.570 ' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:08:58.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.570 --rc genhtml_branch_coverage=1 00:08:58.570 --rc genhtml_function_coverage=1 00:08:58.570 --rc genhtml_legend=1 00:08:58.570 --rc geninfo_all_blocks=1 00:08:58.570 --rc geninfo_unexecuted_blocks=1 00:08:58.570 00:08:58.570 ' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:08:58.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.570 --rc genhtml_branch_coverage=1 00:08:58.570 --rc genhtml_function_coverage=1 00:08:58.570 --rc genhtml_legend=1 00:08:58.570 --rc geninfo_all_blocks=1 00:08:58.570 --rc geninfo_unexecuted_blocks=1 00:08:58.570 00:08:58.570 ' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 
00:08:58.570 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:58.570 --rc genhtml_branch_coverage=1 00:08:58.570 --rc genhtml_function_coverage=1 00:08:58.570 --rc genhtml_legend=1 00:08:58.570 --rc geninfo_all_blocks=1 00:08:58.570 --rc geninfo_unexecuted_blocks=1 00:08:58.570 00:08:58.570 ' 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:08:58.570 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:08:58.571 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 
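The trace just above captures a real bash error from common.sh line 31: `'[' '' -eq 1 ']'` fails with "integer expression expected" because `-eq` requires a numeric operand and the variable under test expanded empty. A minimal sketch of the usual guard for that failure mode (the `flag` variable name is illustrative, not the actual variable in common.sh):

```shell
#!/usr/bin/env bash
# Guarding a numeric [ ... -eq ... ] test against an empty/unset variable.
flag=""

# Unguarded form, as seen in the log, errors out:
#   [ "$flag" -eq 1 ]   # -> "[: : integer expression expected"

# ${flag:-0} substitutes 0 when flag is empty or unset, so the test
# always receives an integer operand and never aborts the script.
if [ "${flag:-0}" -eq 1 ]; then
  state="enabled"
else
  state="disabled"
fi
echo "$state"
```

With `flag` empty the guarded test simply takes the else branch instead of raising the error recorded in the trace.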
00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:08:58.571 10:26:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:05.142 10:26:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@172 -- # 
pci_devs=("${e810[@]}") 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:05.142 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:05.142 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:05.142 
10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:05.142 Found net devices under 0000:86:00.0: cvl_0_0 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:05.142 10:26:44 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:05.142 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:05.142 Found net devices under 0000:86:00.1: cvl_0_1 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@148 -- 
# set_up lo NVMF_TARGET_NS_CMD 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 
target=target0 _ns= 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:05.143 10:26:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:05.143 10.0.0.1 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:05.143 10.0.0.2 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:05.143 10:26:45 
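The trace above shows set_ip turning the pool value 167772161 into 10.0.0.1 before running `ip addr add`. A minimal sketch of that integer-to-dotted-quad conversion, reconstructed from the printf call in the trace (the real val_to_ip in nvmf/setup.sh may differ in detail):

```shell
# Convert a 32-bit integer into dotted-quad notation, as val_to_ip does:
# each byte of the value becomes one octet of the address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (167772161 == 10 * 2^24 + 1)
val_to_ip 167772162   # 10.0.0.2
```

This is why the setup loop can simply increment an integer ip_pool by 2 per initiator/target pair and derive both addresses from it.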
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:05.143 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:05.144 10:26:45 
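The ipts call above expands to an iptables rule carrying an `SPDK_NVMF:` comment built from the wrapper's own arguments, so teardown can later match and delete exactly the rules the test installed. A sketch of that pattern (assumed shape; the actual helper lives in nvmf/common.sh):

```shell
# Build the tag the way the trace shows: the literal rule arguments,
# prefixed with SPDK_NVMF:. Split out as its own function so it can be
# exercised without touching the firewall.
ipts_comment() {
  printf 'SPDK_NVMF:%s' "$*"
}

# Install the rule with the tag attached (requires root; shown for shape).
ipts() {
  iptables "$@" -m comment --comment "$(ipts_comment "$@")"
}

# Example from the trace:
#   ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```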
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:05.144 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:05.144 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:09:05.144 00:09:05.144 --- 10.0.0.1 ping statistics --- 00:09:05.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.144 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:05.144 10:26:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:05.144 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
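Note how set_ip earlier wrote each address to /sys/class/net/<dev>/ifalias with tee, and every get_ip_address in this stretch simply cats that file back rather than parsing `ip addr` output. A sketch of the read side (the optional root parameter is an addition here for testability; the real helper always reads /sys/class/net, optionally inside the target namespace):

```shell
# Read back the address previously stored in the interface's ifalias file.
get_ip_address() {
  local dev=$1 root=${2:-/sys/class/net}
  cat "$root/$dev/ifalias"
}
```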
00:09:05.144 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.151 ms 00:09:05.144 00:09:05.144 --- 10.0.0.2 ping statistics --- 00:09:05.144 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:05.144 rtt min/avg/max/mdev = 0.151/0.151/0.151/0.000 ms 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:05.144 10:26:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:05.144 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n 
initiator1 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
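The repeated get_net_dev calls above resolve logical names (initiator0, target0, initiator1, ...) through the dev_map associative array populated during setup; an unmapped name such as initiator1 fails the lookup, which is how NVMF_SECOND_INITIATOR_IP ends up empty. A minimal sketch of that lookup, mirroring the `[[ -n ... ]]` checks in the trace (assumed shape):

```shell
# dev_map as populated by this run: a single initiator/target pair.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# Resolve a logical device name to its physical interface, failing
# (return 1) when the name is empty or has no mapping.
get_net_dev() {
  local dev=$1
  [[ -n $dev && -n ${dev_map[$dev]} ]] || return 1
  echo "${dev_map[$dev]}"
}
```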
nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:09:05.145 10:26:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=3104319 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 3104319 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3104319 ']' 00:09:05.145 10:26:45 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:05.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.145 [2024-11-20 10:26:45.412138] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:05.145 [2024-11-20 10:26:45.412192] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:05.145 [2024-11-20 10:26:45.492983] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:05.145 [2024-11-20 10:26:45.533784] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:05.145 [2024-11-20 10:26:45.533823] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:05.145 [2024-11-20 10:26:45.533830] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:05.145 [2024-11-20 10:26:45.533837] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:05.145 [2024-11-20 10:26:45.533843] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
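waitforlisten above blocks until the freshly launched nvmf_tgt answers on /var/tmp/spdk.sock. Its retry loop reduces to a generic poll-until-success pattern; a sketch under that assumption (the probe here is an arbitrary command, whereas the real helper probes the RPC socket via rpc.py):

```shell
# Poll a probe command until it succeeds or the retry budget runs out.
waitfor() {
  local probe=$1 max=${2:-100} i=0
  while (( i++ < max )); do
    "$probe" && return 0
    sleep 0.1
  done
  return 1
}

# e.g. waitfor check_rpc_socket 100, where check_rpc_socket wraps
#   rpc.py -s /var/tmp/spdk.sock rpc_get_methods
```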
00:09:05.145 [2024-11-20 10:26:45.535423] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:05.145 [2024-11-20 10:26:45.535539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:05.145 [2024-11-20 10:26:45.535626] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:05.145 [2024-11-20 10:26:45.535627] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:05.145 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:05.146 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:05.146 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:05.146 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:09:05.146 [2024-11-20 10:26:45.844469] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:05.405 10:26:45 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.405 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:09:05.405 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.664 10:26:46 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:09:05.664 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:05.924 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:09:05.924 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.183 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:09:06.183 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:09:06.442 10:26:46 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.442 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:09:06.442 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.701 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:09:06.701 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:09:06.961 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:09:06.961 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 
'Malloc4 Malloc5 Malloc6' 00:09:07.220 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:09:07.479 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:07.479 10:26:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:07.479 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:09:07.479 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:09:07.737 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:07.996 [2024-11-20 10:26:48.550335] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:07.996 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:09:08.255 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:09:08.514 10:26:48 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 
00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:09:09.451 10:26:50 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:09:11.985 10:26:52 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:09:11.985 [global] 00:09:11.985 thread=1 00:09:11.985 invalidate=1 00:09:11.985 rw=write 00:09:11.985 time_based=1 00:09:11.985 runtime=1 00:09:11.985 ioengine=libaio 00:09:11.985 direct=1 00:09:11.985 bs=4096 00:09:11.985 iodepth=1 00:09:11.985 norandommap=0 00:09:11.985 numjobs=1 00:09:11.985 00:09:11.985 
verify_dump=1 00:09:11.986 verify_backlog=512 00:09:11.986 verify_state_save=0 00:09:11.986 do_verify=1 00:09:11.986 verify=crc32c-intel 00:09:11.986 [job0] 00:09:11.986 filename=/dev/nvme0n1 00:09:11.986 [job1] 00:09:11.986 filename=/dev/nvme0n2 00:09:11.986 [job2] 00:09:11.986 filename=/dev/nvme0n3 00:09:11.986 [job3] 00:09:11.986 filename=/dev/nvme0n4 00:09:11.986 Could not set queue depth (nvme0n1) 00:09:11.986 Could not set queue depth (nvme0n2) 00:09:11.986 Could not set queue depth (nvme0n3) 00:09:11.986 Could not set queue depth (nvme0n4) 00:09:11.986 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.986 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.986 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.986 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:11.986 fio-3.35 00:09:11.986 Starting 4 threads 00:09:13.363 00:09:13.363 job0: (groupid=0, jobs=1): err= 0: pid=3105667: Wed Nov 20 10:26:53 2024 00:09:13.363 read: IOPS=1857, BW=7429KiB/s (7607kB/s)(7436KiB/1001msec) 00:09:13.363 slat (nsec): min=6483, max=23748, avg=7628.48, stdev=1255.84 00:09:13.363 clat (usec): min=178, max=652, avg=305.44, stdev=65.48 00:09:13.363 lat (usec): min=185, max=660, avg=313.07, stdev=65.43 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 196], 5.00th=[ 210], 10.00th=[ 223], 20.00th=[ 245], 00:09:13.363 | 30.00th=[ 265], 40.00th=[ 285], 50.00th=[ 302], 60.00th=[ 318], 00:09:13.363 | 70.00th=[ 334], 80.00th=[ 359], 90.00th=[ 400], 95.00th=[ 424], 00:09:13.363 | 99.00th=[ 461], 99.50th=[ 474], 99.90th=[ 519], 99.95th=[ 652], 00:09:13.363 | 99.99th=[ 652] 00:09:13.363 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:13.363 slat (nsec): min=9496, max=59446, avg=11673.06, stdev=3203.34 
00:09:13.363 clat (usec): min=123, max=484, avg=187.70, stdev=33.26 00:09:13.363 lat (usec): min=133, max=507, avg=199.37, stdev=33.38 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 133], 5.00th=[ 141], 10.00th=[ 147], 20.00th=[ 157], 00:09:13.363 | 30.00th=[ 165], 40.00th=[ 176], 50.00th=[ 186], 60.00th=[ 198], 00:09:13.363 | 70.00th=[ 208], 80.00th=[ 217], 90.00th=[ 229], 95.00th=[ 239], 00:09:13.363 | 99.00th=[ 262], 99.50th=[ 306], 99.90th=[ 347], 99.95th=[ 433], 00:09:13.363 | 99.99th=[ 486] 00:09:13.363 bw ( KiB/s): min= 8192, max= 8192, per=25.83%, avg=8192.00, stdev= 0.00, samples=1 00:09:13.363 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:13.363 lat (usec) : 250=62.50%, 500=37.39%, 750=0.10% 00:09:13.363 cpu : usr=2.30%, sys=3.80%, ctx=3909, majf=0, minf=1 00:09:13.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 issued rwts: total=1859,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.363 job1: (groupid=0, jobs=1): err= 0: pid=3105668: Wed Nov 20 10:26:53 2024 00:09:13.363 read: IOPS=1493, BW=5975KiB/s (6118kB/s)(6172KiB/1033msec) 00:09:13.363 slat (nsec): min=7089, max=26525, avg=9524.71, stdev=1714.38 00:09:13.363 clat (usec): min=190, max=40527, avg=402.45, stdev=1771.54 00:09:13.363 lat (usec): min=199, max=40554, avg=411.97, stdev=1771.78 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 212], 5.00th=[ 233], 10.00th=[ 241], 20.00th=[ 251], 00:09:13.363 | 30.00th=[ 273], 40.00th=[ 302], 50.00th=[ 322], 60.00th=[ 338], 00:09:13.363 | 70.00th=[ 359], 80.00th=[ 392], 90.00th=[ 416], 95.00th=[ 441], 00:09:13.363 | 99.00th=[ 490], 99.50th=[ 506], 99.90th=[40633], 99.95th=[40633], 00:09:13.363 | 99.99th=[40633] 00:09:13.363 
write: IOPS=1982, BW=7930KiB/s (8121kB/s)(8192KiB/1033msec); 0 zone resets 00:09:13.363 slat (nsec): min=9826, max=50709, avg=13259.83, stdev=3900.50 00:09:13.363 clat (usec): min=110, max=370, avg=175.00, stdev=32.80 00:09:13.363 lat (usec): min=134, max=421, avg=188.26, stdev=33.33 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 127], 5.00th=[ 137], 10.00th=[ 141], 20.00th=[ 147], 00:09:13.363 | 30.00th=[ 153], 40.00th=[ 159], 50.00th=[ 165], 60.00th=[ 176], 00:09:13.363 | 70.00th=[ 196], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 229], 00:09:13.363 | 99.00th=[ 255], 99.50th=[ 285], 99.90th=[ 343], 99.95th=[ 347], 00:09:13.363 | 99.99th=[ 371] 00:09:13.363 bw ( KiB/s): min= 8192, max= 8192, per=25.83%, avg=8192.00, stdev= 0.00, samples=2 00:09:13.363 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:09:13.363 lat (usec) : 250=64.58%, 500=35.14%, 750=0.17% 00:09:13.363 lat (msec) : 4=0.03%, 50=0.08% 00:09:13.363 cpu : usr=3.20%, sys=4.84%, ctx=3593, majf=0, minf=1 00:09:13.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 issued rwts: total=1543,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.363 job2: (groupid=0, jobs=1): err= 0: pid=3105670: Wed Nov 20 10:26:53 2024 00:09:13.363 read: IOPS=1671, BW=6686KiB/s (6847kB/s)(6860KiB/1026msec) 00:09:13.363 slat (nsec): min=7156, max=30124, avg=9507.78, stdev=1657.90 00:09:13.363 clat (usec): min=196, max=41220, avg=363.49, stdev=2199.97 00:09:13.363 lat (usec): min=205, max=41250, avg=373.00, stdev=2200.78 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 227], 20.00th=[ 233], 00:09:13.363 | 30.00th=[ 237], 40.00th=[ 241], 50.00th=[ 245], 60.00th=[ 247], 00:09:13.363 
| 70.00th=[ 251], 80.00th=[ 255], 90.00th=[ 265], 95.00th=[ 269], 00:09:13.363 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[41157], 99.95th=[41157], 00:09:13.363 | 99.99th=[41157] 00:09:13.363 write: IOPS=1996, BW=7984KiB/s (8176kB/s)(8192KiB/1026msec); 0 zone resets 00:09:13.363 slat (nsec): min=9438, max=38781, avg=12406.78, stdev=2566.87 00:09:13.363 clat (usec): min=131, max=294, avg=170.64, stdev=20.12 00:09:13.363 lat (usec): min=146, max=333, avg=183.04, stdev=20.33 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 143], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 157], 00:09:13.363 | 30.00th=[ 161], 40.00th=[ 163], 50.00th=[ 167], 60.00th=[ 172], 00:09:13.363 | 70.00th=[ 176], 80.00th=[ 182], 90.00th=[ 192], 95.00th=[ 210], 00:09:13.363 | 99.00th=[ 249], 99.50th=[ 258], 99.90th=[ 281], 99.95th=[ 285], 00:09:13.363 | 99.99th=[ 293] 00:09:13.363 bw ( KiB/s): min= 6304, max=10080, per=25.83%, avg=8192.00, stdev=2670.04, samples=2 00:09:13.363 iops : min= 1576, max= 2520, avg=2048.00, stdev=667.51, samples=2 00:09:13.363 lat (usec) : 250=84.93%, 500=14.91%, 750=0.03% 00:09:13.363 lat (msec) : 50=0.13% 00:09:13.363 cpu : usr=2.44%, sys=3.90%, ctx=3764, majf=0, minf=2 00:09:13.363 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.363 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.363 issued rwts: total=1715,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.363 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.363 job3: (groupid=0, jobs=1): err= 0: pid=3105671: Wed Nov 20 10:26:53 2024 00:09:13.363 read: IOPS=1671, BW=6685KiB/s (6846kB/s)(6692KiB/1001msec) 00:09:13.363 slat (nsec): min=8648, max=48680, avg=10438.36, stdev=2912.39 00:09:13.363 clat (usec): min=197, max=590, avg=324.22, stdev=75.34 00:09:13.363 lat (usec): min=206, max=600, avg=334.66, stdev=75.16 00:09:13.363 clat percentiles 
(usec): 00:09:13.363 | 1.00th=[ 212], 5.00th=[ 227], 10.00th=[ 237], 20.00th=[ 251], 00:09:13.363 | 30.00th=[ 269], 40.00th=[ 293], 50.00th=[ 318], 60.00th=[ 334], 00:09:13.363 | 70.00th=[ 355], 80.00th=[ 396], 90.00th=[ 437], 95.00th=[ 465], 00:09:13.363 | 99.00th=[ 519], 99.50th=[ 537], 99.90th=[ 553], 99.95th=[ 594], 00:09:13.363 | 99.99th=[ 594] 00:09:13.363 write: IOPS=2045, BW=8184KiB/s (8380kB/s)(8192KiB/1001msec); 0 zone resets 00:09:13.363 slat (nsec): min=11263, max=47020, avg=14362.64, stdev=3490.75 00:09:13.363 clat (usec): min=133, max=793, avg=194.46, stdev=35.46 00:09:13.363 lat (usec): min=147, max=807, avg=208.82, stdev=35.39 00:09:13.363 clat percentiles (usec): 00:09:13.363 | 1.00th=[ 143], 5.00th=[ 153], 10.00th=[ 159], 20.00th=[ 167], 00:09:13.363 | 30.00th=[ 174], 40.00th=[ 182], 50.00th=[ 190], 60.00th=[ 200], 00:09:13.363 | 70.00th=[ 208], 80.00th=[ 219], 90.00th=[ 233], 95.00th=[ 245], 00:09:13.363 | 99.00th=[ 293], 99.50th=[ 306], 99.90th=[ 334], 99.95th=[ 750], 00:09:13.364 | 99.99th=[ 791] 00:09:13.364 bw ( KiB/s): min= 8192, max= 8192, per=25.83%, avg=8192.00, stdev= 0.00, samples=1 00:09:13.364 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=1 00:09:13.364 lat (usec) : 250=61.03%, 500=38.16%, 750=0.75%, 1000=0.05% 00:09:13.364 cpu : usr=4.00%, sys=6.20%, ctx=3723, majf=0, minf=1 00:09:13.364 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:13.364 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.364 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:13.364 issued rwts: total=1673,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:13.364 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:13.364 00:09:13.364 Run status group 0 (all jobs): 00:09:13.364 READ: bw=25.7MiB/s (26.9MB/s), 5975KiB/s-7429KiB/s (6118kB/s-7607kB/s), io=26.5MiB (27.8MB), run=1001-1033msec 00:09:13.364 WRITE: bw=31.0MiB/s (32.5MB/s), 7930KiB/s-8184KiB/s 
(8121kB/s-8380kB/s), io=32.0MiB (33.6MB), run=1001-1033msec 00:09:13.364 00:09:13.364 Disk stats (read/write): 00:09:13.364 nvme0n1: ios=1560/1627, merge=0/0, ticks=1350/314, in_queue=1664, util=85.87% 00:09:13.364 nvme0n2: ios=1559/1715, merge=0/0, ticks=1396/292, in_queue=1688, util=89.73% 00:09:13.364 nvme0n3: ios=1767/2048, merge=0/0, ticks=475/339, in_queue=814, util=94.58% 00:09:13.364 nvme0n4: ios=1544/1536, merge=0/0, ticks=554/302, in_queue=856, util=95.49% 00:09:13.364 10:26:53 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:09:13.364 [global] 00:09:13.364 thread=1 00:09:13.364 invalidate=1 00:09:13.364 rw=randwrite 00:09:13.364 time_based=1 00:09:13.364 runtime=1 00:09:13.364 ioengine=libaio 00:09:13.364 direct=1 00:09:13.364 bs=4096 00:09:13.364 iodepth=1 00:09:13.364 norandommap=0 00:09:13.364 numjobs=1 00:09:13.364 00:09:13.364 verify_dump=1 00:09:13.364 verify_backlog=512 00:09:13.364 verify_state_save=0 00:09:13.364 do_verify=1 00:09:13.364 verify=crc32c-intel 00:09:13.364 [job0] 00:09:13.364 filename=/dev/nvme0n1 00:09:13.364 [job1] 00:09:13.364 filename=/dev/nvme0n2 00:09:13.364 [job2] 00:09:13.364 filename=/dev/nvme0n3 00:09:13.364 [job3] 00:09:13.364 filename=/dev/nvme0n4 00:09:13.364 Could not set queue depth (nvme0n1) 00:09:13.364 Could not set queue depth (nvme0n2) 00:09:13.364 Could not set queue depth (nvme0n3) 00:09:13.364 Could not set queue depth (nvme0n4) 00:09:13.364 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.364 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.364 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:13.364 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:09:13.364 fio-3.35 00:09:13.364 Starting 4 threads 00:09:14.741 00:09:14.741 job0: (groupid=0, jobs=1): err= 0: pid=3106046: Wed Nov 20 10:26:55 2024 00:09:14.741 read: IOPS=23, BW=93.8KiB/s (96.1kB/s)(96.0KiB/1023msec) 00:09:14.741 slat (nsec): min=10398, max=24457, avg=21035.33, stdev=3927.18 00:09:14.741 clat (usec): min=234, max=41028, avg=37558.30, stdev=11495.57 00:09:14.741 lat (usec): min=256, max=41051, avg=37579.34, stdev=11495.10 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 235], 5.00th=[ 237], 10.00th=[40633], 20.00th=[41157], 00:09:14.741 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:14.741 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:14.741 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:14.741 | 99.99th=[41157] 00:09:14.741 write: IOPS=500, BW=2002KiB/s (2050kB/s)(2048KiB/1023msec); 0 zone resets 00:09:14.741 slat (nsec): min=10615, max=38022, avg=12289.19, stdev=2260.84 00:09:14.741 clat (usec): min=143, max=360, avg=219.49, stdev=19.90 00:09:14.741 lat (usec): min=155, max=372, avg=231.78, stdev=19.96 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 161], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 206], 00:09:14.741 | 30.00th=[ 210], 40.00th=[ 215], 50.00th=[ 219], 60.00th=[ 223], 00:09:14.741 | 70.00th=[ 229], 80.00th=[ 233], 90.00th=[ 241], 95.00th=[ 245], 00:09:14.741 | 99.00th=[ 265], 99.50th=[ 338], 99.90th=[ 359], 99.95th=[ 359], 00:09:14.741 | 99.99th=[ 359] 00:09:14.741 bw ( KiB/s): min= 4096, max= 4096, per=18.90%, avg=4096.00, stdev= 0.00, samples=1 00:09:14.741 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:14.741 lat (usec) : 250=93.28%, 500=2.61% 00:09:14.741 lat (msec) : 50=4.10% 00:09:14.741 cpu : usr=0.20%, sys=1.17%, ctx=538, majf=0, minf=1 00:09:14.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.741 submit : 0=0.0%, 
4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.741 job1: (groupid=0, jobs=1): err= 0: pid=3106054: Wed Nov 20 10:26:55 2024 00:09:14.741 read: IOPS=21, BW=85.9KiB/s (88.0kB/s)(88.0KiB/1024msec) 00:09:14.741 slat (nsec): min=9906, max=23692, avg=12487.36, stdev=3543.30 00:09:14.741 clat (usec): min=40668, max=41090, avg=40972.23, stdev=76.84 00:09:14.741 lat (usec): min=40679, max=41102, avg=40984.71, stdev=76.76 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:09:14.741 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:09:14.741 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:09:14.741 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:09:14.741 | 99.99th=[41157] 00:09:14.741 write: IOPS=500, BW=2000KiB/s (2048kB/s)(2048KiB/1024msec); 0 zone resets 00:09:14.741 slat (nsec): min=10197, max=51289, avg=11704.01, stdev=2351.53 00:09:14.741 clat (usec): min=151, max=350, avg=224.53, stdev=25.41 00:09:14.741 lat (usec): min=163, max=361, avg=236.24, stdev=25.60 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 163], 5.00th=[ 196], 10.00th=[ 204], 20.00th=[ 208], 00:09:14.741 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 227], 00:09:14.741 | 70.00th=[ 233], 80.00th=[ 239], 90.00th=[ 251], 95.00th=[ 273], 00:09:14.741 | 99.00th=[ 314], 99.50th=[ 330], 99.90th=[ 351], 99.95th=[ 351], 00:09:14.741 | 99.99th=[ 351] 00:09:14.741 bw ( KiB/s): min= 4096, max= 4096, per=18.90%, avg=4096.00, stdev= 0.00, samples=1 00:09:14.741 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:09:14.741 lat (usec) : 250=86.14%, 500=9.74% 00:09:14.741 lat (msec) : 50=4.12% 
00:09:14.741 cpu : usr=0.59%, sys=0.59%, ctx=534, majf=0, minf=2 00:09:14.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.741 job2: (groupid=0, jobs=1): err= 0: pid=3106059: Wed Nov 20 10:26:55 2024 00:09:14.741 read: IOPS=1534, BW=6138KiB/s (6285kB/s)(6144KiB/1001msec) 00:09:14.741 slat (nsec): min=6762, max=24245, avg=7726.03, stdev=1100.99 00:09:14.741 clat (usec): min=184, max=41028, avg=411.32, stdev=2533.34 00:09:14.741 lat (usec): min=191, max=41052, avg=419.04, stdev=2533.90 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 196], 5.00th=[ 208], 10.00th=[ 215], 20.00th=[ 227], 00:09:14.741 | 30.00th=[ 237], 40.00th=[ 245], 50.00th=[ 253], 60.00th=[ 262], 00:09:14.741 | 70.00th=[ 269], 80.00th=[ 277], 90.00th=[ 285], 95.00th=[ 293], 00:09:14.741 | 99.00th=[ 400], 99.50th=[ 416], 99.90th=[41157], 99.95th=[41157], 00:09:14.741 | 99.99th=[41157] 00:09:14.741 write: IOPS=1963, BW=7852KiB/s (8041kB/s)(7860KiB/1001msec); 0 zone resets 00:09:14.741 slat (nsec): min=9624, max=37953, avg=10751.70, stdev=1233.59 00:09:14.741 clat (usec): min=117, max=355, avg=166.56, stdev=38.20 00:09:14.741 lat (usec): min=127, max=370, avg=177.32, stdev=38.35 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 124], 5.00th=[ 130], 10.00th=[ 135], 20.00th=[ 139], 00:09:14.741 | 30.00th=[ 143], 40.00th=[ 147], 50.00th=[ 151], 60.00th=[ 157], 00:09:14.741 | 70.00th=[ 165], 80.00th=[ 210], 90.00th=[ 227], 95.00th=[ 237], 00:09:14.741 | 99.00th=[ 265], 99.50th=[ 322], 99.90th=[ 343], 99.95th=[ 355], 00:09:14.741 | 99.99th=[ 355] 00:09:14.741 bw ( KiB/s): min= 4224, max= 4224, per=19.49%, avg=4224.00, stdev= 0.00, 
samples=1 00:09:14.741 iops : min= 1056, max= 1056, avg=1056.00, stdev= 0.00, samples=1 00:09:14.741 lat (usec) : 250=75.95%, 500=23.88% 00:09:14.741 lat (msec) : 50=0.17% 00:09:14.741 cpu : usr=1.50%, sys=3.60%, ctx=3502, majf=0, minf=1 00:09:14.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 issued rwts: total=1536,1965,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.741 job3: (groupid=0, jobs=1): err= 0: pid=3106066: Wed Nov 20 10:26:55 2024 00:09:14.741 read: IOPS=2238, BW=8955KiB/s (9170kB/s)(8964KiB/1001msec) 00:09:14.741 slat (nsec): min=6672, max=28535, avg=8040.01, stdev=1339.16 00:09:14.741 clat (usec): min=190, max=530, avg=251.64, stdev=29.94 00:09:14.741 lat (usec): min=197, max=557, avg=259.68, stdev=30.08 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 198], 5.00th=[ 210], 10.00th=[ 219], 20.00th=[ 229], 00:09:14.741 | 30.00th=[ 237], 40.00th=[ 243], 50.00th=[ 249], 60.00th=[ 255], 00:09:14.741 | 70.00th=[ 265], 80.00th=[ 273], 90.00th=[ 285], 95.00th=[ 293], 00:09:14.741 | 99.00th=[ 392], 99.50th=[ 408], 99.90th=[ 420], 99.95th=[ 429], 00:09:14.741 | 99.99th=[ 529] 00:09:14.741 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:09:14.741 slat (nsec): min=9719, max=41136, avg=10978.02, stdev=1367.00 00:09:14.741 clat (usec): min=112, max=302, avg=148.20, stdev=15.12 00:09:14.741 lat (usec): min=122, max=335, avg=159.18, stdev=15.54 00:09:14.741 clat percentiles (usec): 00:09:14.741 | 1.00th=[ 120], 5.00th=[ 126], 10.00th=[ 131], 20.00th=[ 137], 00:09:14.741 | 30.00th=[ 141], 40.00th=[ 145], 50.00th=[ 147], 60.00th=[ 151], 00:09:14.741 | 70.00th=[ 155], 80.00th=[ 161], 90.00th=[ 167], 95.00th=[ 172], 00:09:14.741 | 99.00th=[ 184], 
99.50th=[ 190], 99.90th=[ 277], 99.95th=[ 285], 00:09:14.741 | 99.99th=[ 302] 00:09:14.741 bw ( KiB/s): min=11888, max=11888, per=54.84%, avg=11888.00, stdev= 0.00, samples=1 00:09:14.741 iops : min= 2972, max= 2972, avg=2972.00, stdev= 0.00, samples=1 00:09:14.741 lat (usec) : 250=77.61%, 500=22.37%, 750=0.02% 00:09:14.741 cpu : usr=2.20%, sys=5.00%, ctx=4802, majf=0, minf=1 00:09:14.741 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:14.741 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:14.741 issued rwts: total=2241,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:14.741 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:14.741 00:09:14.741 Run status group 0 (all jobs): 00:09:14.741 READ: bw=14.6MiB/s (15.3MB/s), 85.9KiB/s-8955KiB/s (88.0kB/s-9170kB/s), io=14.9MiB (15.7MB), run=1001-1024msec 00:09:14.741 WRITE: bw=21.2MiB/s (22.2MB/s), 2000KiB/s-9.99MiB/s (2048kB/s-10.5MB/s), io=21.7MiB (22.7MB), run=1001-1024msec 00:09:14.741 00:09:14.741 Disk stats (read/write): 00:09:14.741 nvme0n1: ios=41/512, merge=0/0, ticks=1518/102, in_queue=1620, util=84.95% 00:09:14.741 nvme0n2: ios=66/512, merge=0/0, ticks=708/111, in_queue=819, util=85.58% 00:09:14.741 nvme0n3: ios=1046/1508, merge=0/0, ticks=1409/252, in_queue=1661, util=92.72% 00:09:14.741 nvme0n4: ios=1795/2048, merge=0/0, ticks=1344/292, in_queue=1636, util=97.45% 00:09:14.741 10:26:55 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:09:14.741 [global] 00:09:14.741 thread=1 00:09:14.741 invalidate=1 00:09:14.741 rw=write 00:09:14.741 time_based=1 00:09:14.741 runtime=1 00:09:14.742 ioengine=libaio 00:09:14.742 direct=1 00:09:14.742 bs=4096 00:09:14.742 iodepth=128 00:09:14.742 norandommap=0 00:09:14.742 numjobs=1 00:09:14.742 
00:09:14.742 verify_dump=1 00:09:14.742 verify_backlog=512 00:09:14.742 verify_state_save=0 00:09:14.742 do_verify=1 00:09:14.742 verify=crc32c-intel 00:09:14.742 [job0] 00:09:14.742 filename=/dev/nvme0n1 00:09:14.742 [job1] 00:09:14.742 filename=/dev/nvme0n2 00:09:14.742 [job2] 00:09:14.742 filename=/dev/nvme0n3 00:09:14.742 [job3] 00:09:14.742 filename=/dev/nvme0n4 00:09:14.742 Could not set queue depth (nvme0n1) 00:09:14.742 Could not set queue depth (nvme0n2) 00:09:14.742 Could not set queue depth (nvme0n3) 00:09:14.742 Could not set queue depth (nvme0n4) 00:09:15.000 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.000 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.000 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.000 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:15.000 fio-3.35 00:09:15.000 Starting 4 threads 00:09:16.377 00:09:16.377 job0: (groupid=0, jobs=1): err= 0: pid=3106487: Wed Nov 20 10:26:56 2024 00:09:16.377 read: IOPS=5278, BW=20.6MiB/s (21.6MB/s)(20.7MiB/1002msec) 00:09:16.377 slat (nsec): min=1264, max=21015k, avg=94759.86, stdev=606944.79 00:09:16.377 clat (usec): min=677, max=69426, avg=11736.86, stdev=7992.04 00:09:16.377 lat (usec): min=3793, max=69434, avg=11831.62, stdev=8057.30 00:09:16.377 clat percentiles (usec): 00:09:16.377 | 1.00th=[ 7111], 5.00th=[ 8291], 10.00th=[ 8848], 20.00th=[ 9634], 00:09:16.377 | 30.00th=[ 9896], 40.00th=[10028], 50.00th=[10159], 60.00th=[10290], 00:09:16.377 | 70.00th=[10552], 80.00th=[11076], 90.00th=[11863], 95.00th=[15008], 00:09:16.377 | 99.00th=[57934], 99.50th=[61604], 99.90th=[66847], 99.95th=[69731], 00:09:16.377 | 99.99th=[69731] 00:09:16.377 write: IOPS=5620, BW=22.0MiB/s (23.0MB/s)(22.0MiB/1002msec); 0 zone resets 00:09:16.377 slat 
(usec): min=2, max=6186, avg=83.02, stdev=389.48 00:09:16.377 clat (usec): min=6728, max=43734, avg=11489.50, stdev=4823.56 00:09:16.377 lat (usec): min=6743, max=43745, avg=11572.53, stdev=4858.30 00:09:16.377 clat percentiles (usec): 00:09:16.377 | 1.00th=[ 7898], 5.00th=[ 9372], 10.00th=[ 9634], 20.00th=[ 9896], 00:09:16.377 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10159], 60.00th=[10290], 00:09:16.377 | 70.00th=[10421], 80.00th=[10814], 90.00th=[12911], 95.00th=[21365], 00:09:16.377 | 99.00th=[38536], 99.50th=[40109], 99.90th=[43779], 99.95th=[43779], 00:09:16.377 | 99.99th=[43779] 00:09:16.377 bw ( KiB/s): min=20480, max=24576, per=30.32%, avg=22528.00, stdev=2896.31, samples=2 00:09:16.377 iops : min= 5120, max= 6144, avg=5632.00, stdev=724.08, samples=2 00:09:16.377 lat (usec) : 750=0.01% 00:09:16.377 lat (msec) : 4=0.38%, 10=33.78%, 20=61.01%, 50=3.65%, 100=1.16% 00:09:16.377 cpu : usr=4.80%, sys=6.29%, ctx=618, majf=0, minf=1 00:09:16.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4% 00:09:16.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.377 issued rwts: total=5289,5632,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.377 job1: (groupid=0, jobs=1): err= 0: pid=3106504: Wed Nov 20 10:26:56 2024 00:09:16.377 read: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec) 00:09:16.377 slat (nsec): min=1170, max=10486k, avg=95712.24, stdev=650334.52 00:09:16.377 clat (usec): min=5165, max=29943, avg=11713.28, stdev=3510.68 00:09:16.377 lat (usec): min=5171, max=29946, avg=11808.99, stdev=3562.57 00:09:16.377 clat percentiles (usec): 00:09:16.377 | 1.00th=[ 6063], 5.00th=[ 8225], 10.00th=[ 8979], 20.00th=[ 9896], 00:09:16.377 | 30.00th=[10552], 40.00th=[10814], 50.00th=[11076], 60.00th=[11338], 00:09:16.377 | 70.00th=[11600], 80.00th=[12256], 
90.00th=[14484], 95.00th=[19268], 00:09:16.377 | 99.00th=[26870], 99.50th=[28705], 99.90th=[30016], 99.95th=[30016], 00:09:16.377 | 99.99th=[30016] 00:09:16.377 write: IOPS=4280, BW=16.7MiB/s (17.5MB/s)(16.8MiB/1005msec); 0 zone resets 00:09:16.377 slat (usec): min=2, max=10570, avg=130.35, stdev=740.95 00:09:16.377 clat (usec): min=1316, max=68397, avg=18417.85, stdev=15218.00 00:09:16.377 lat (usec): min=1327, max=68401, avg=18548.20, stdev=15310.06 00:09:16.377 clat percentiles (usec): 00:09:16.377 | 1.00th=[ 3523], 5.00th=[ 5342], 10.00th=[ 7373], 20.00th=[ 9241], 00:09:16.377 | 30.00th=[10159], 40.00th=[10421], 50.00th=[11207], 60.00th=[13960], 00:09:16.377 | 70.00th=[20055], 80.00th=[21103], 90.00th=[47449], 95.00th=[58459], 00:09:16.377 | 99.00th=[61080], 99.50th=[66323], 99.90th=[68682], 99.95th=[68682], 00:09:16.377 | 99.99th=[68682] 00:09:16.377 bw ( KiB/s): min=12912, max=20480, per=22.47%, avg=16696.00, stdev=5351.38, samples=2 00:09:16.377 iops : min= 3228, max= 5120, avg=4174.00, stdev=1337.85, samples=2 00:09:16.377 lat (msec) : 2=0.08%, 4=1.19%, 10=21.41%, 20=59.65%, 50=13.17% 00:09:16.377 lat (msec) : 100=4.50% 00:09:16.377 cpu : usr=3.88%, sys=4.68%, ctx=394, majf=0, minf=2 00:09:16.377 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:09:16.377 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.377 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.377 issued rwts: total=4096,4302,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.377 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.377 job2: (groupid=0, jobs=1): err= 0: pid=3106525: Wed Nov 20 10:26:56 2024 00:09:16.377 read: IOPS=3041, BW=11.9MiB/s (12.5MB/s)(12.0MiB/1010msec) 00:09:16.377 slat (nsec): min=1631, max=12340k, avg=116499.13, stdev=761272.30 00:09:16.377 clat (usec): min=3378, max=40827, avg=14388.84, stdev=5453.77 00:09:16.378 lat (usec): min=3384, max=40832, avg=14505.34, 
stdev=5486.28 00:09:16.378 clat percentiles (usec): 00:09:16.378 | 1.00th=[ 5604], 5.00th=[ 9765], 10.00th=[10290], 20.00th=[11469], 00:09:16.378 | 30.00th=[11731], 40.00th=[12256], 50.00th=[12387], 60.00th=[12649], 00:09:16.378 | 70.00th=[14746], 80.00th=[16188], 90.00th=[22938], 95.00th=[26870], 00:09:16.378 | 99.00th=[35390], 99.50th=[35390], 99.90th=[35390], 99.95th=[35390], 00:09:16.378 | 99.99th=[40633] 00:09:16.378 write: IOPS=3486, BW=13.6MiB/s (14.3MB/s)(13.8MiB/1010msec); 0 zone resets 00:09:16.378 slat (usec): min=2, max=18463, avg=171.19, stdev=986.68 00:09:16.378 clat (usec): min=2513, max=87319, avg=23753.06, stdev=15812.83 00:09:16.378 lat (usec): min=2517, max=87323, avg=23924.25, stdev=15902.17 00:09:16.378 clat percentiles (usec): 00:09:16.378 | 1.00th=[ 4293], 5.00th=[ 6849], 10.00th=[ 9896], 20.00th=[10945], 00:09:16.378 | 30.00th=[14877], 40.00th=[19006], 50.00th=[20579], 60.00th=[20841], 00:09:16.378 | 70.00th=[21365], 80.00th=[33162], 90.00th=[48497], 95.00th=[54264], 00:09:16.378 | 99.00th=[84411], 99.50th=[86508], 99.90th=[87557], 99.95th=[87557], 00:09:16.378 | 99.99th=[87557] 00:09:16.378 bw ( KiB/s): min=12760, max=14384, per=18.26%, avg=13572.00, stdev=1148.34, samples=2 00:09:16.378 iops : min= 3190, max= 3596, avg=3393.00, stdev=287.09, samples=2 00:09:16.378 lat (msec) : 4=0.73%, 10=8.58%, 20=53.95%, 50=31.59%, 100=5.14% 00:09:16.378 cpu : usr=2.78%, sys=4.76%, ctx=367, majf=0, minf=1 00:09:16.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.0% 00:09:16.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.378 issued rwts: total=3072,3521,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:16.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.378 job3: (groupid=0, jobs=1): err= 0: pid=3106531: Wed Nov 20 10:26:56 2024 00:09:16.378 read: IOPS=5104, BW=19.9MiB/s 
(20.9MB/s)(20.0MiB/1003msec) 00:09:16.378 slat (nsec): min=1260, max=18581k, avg=90944.62, stdev=583176.26 00:09:16.378 clat (usec): min=6880, max=30281, avg=12049.50, stdev=2846.22 00:09:16.378 lat (usec): min=6886, max=30307, avg=12140.44, stdev=2886.01 00:09:16.378 clat percentiles (usec): 00:09:16.378 | 1.00th=[ 8225], 5.00th=[ 9634], 10.00th=[10290], 20.00th=[10814], 00:09:16.378 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11469], 60.00th=[11600], 00:09:16.378 | 70.00th=[11863], 80.00th=[12387], 90.00th=[13566], 95.00th=[18482], 00:09:16.378 | 99.00th=[24773], 99.50th=[24773], 99.90th=[24773], 99.95th=[28181], 00:09:16.378 | 99.99th=[30278] 00:09:16.378 write: IOPS=5292, BW=20.7MiB/s (21.7MB/s)(20.7MiB/1003msec); 0 zone resets 00:09:16.378 slat (usec): min=2, max=15820, avg=94.01, stdev=613.98 00:09:16.378 clat (usec): min=2674, max=38732, avg=12322.93, stdev=3533.36 00:09:16.378 lat (usec): min=3331, max=38756, avg=12416.94, stdev=3598.14 00:09:16.378 clat percentiles (usec): 00:09:16.378 | 1.00th=[ 6521], 5.00th=[ 8848], 10.00th=[10159], 20.00th=[10814], 00:09:16.378 | 30.00th=[11076], 40.00th=[11207], 50.00th=[11207], 60.00th=[11338], 00:09:16.378 | 70.00th=[11600], 80.00th=[12387], 90.00th=[19530], 95.00th=[20841], 00:09:16.378 | 99.00th=[25035], 99.50th=[25035], 99.90th=[25822], 99.95th=[36439], 00:09:16.378 | 99.99th=[38536] 00:09:16.378 bw ( KiB/s): min=20480, max=20968, per=27.89%, avg=20724.00, stdev=345.07, samples=2 00:09:16.378 iops : min= 5120, max= 5242, avg=5181.00, stdev=86.27, samples=2 00:09:16.378 lat (msec) : 4=0.14%, 10=7.57%, 20=85.47%, 50=6.82% 00:09:16.378 cpu : usr=3.99%, sys=6.79%, ctx=495, majf=0, minf=1 00:09:16.378 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:09:16.378 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:16.378 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:16.378 issued rwts: total=5120,5308,0,0 short=0,0,0,0 dropped=0,0,0,0 
00:09:16.378 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:16.378 00:09:16.378 Run status group 0 (all jobs): 00:09:16.378 READ: bw=68.0MiB/s (71.3MB/s), 11.9MiB/s-20.6MiB/s (12.5MB/s-21.6MB/s), io=68.7MiB (72.0MB), run=1002-1010msec 00:09:16.378 WRITE: bw=72.6MiB/s (76.1MB/s), 13.6MiB/s-22.0MiB/s (14.3MB/s-23.0MB/s), io=73.3MiB (76.9MB), run=1002-1010msec 00:09:16.378 00:09:16.378 Disk stats (read/write): 00:09:16.378 nvme0n1: ios=4601/4608, merge=0/0, ticks=19114/16063, in_queue=35177, util=85.67% 00:09:16.378 nvme0n2: ios=3625/3599, merge=0/0, ticks=36393/56148, in_queue=92541, util=90.35% 00:09:16.378 nvme0n3: ios=2578/2879, merge=0/0, ticks=32212/64728, in_queue=96940, util=93.31% 00:09:16.378 nvme0n4: ios=4245/4608, merge=0/0, ticks=21869/27120, in_queue=48989, util=94.10% 00:09:16.378 10:26:56 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:09:16.378 [global] 00:09:16.378 thread=1 00:09:16.378 invalidate=1 00:09:16.378 rw=randwrite 00:09:16.378 time_based=1 00:09:16.378 runtime=1 00:09:16.378 ioengine=libaio 00:09:16.378 direct=1 00:09:16.378 bs=4096 00:09:16.378 iodepth=128 00:09:16.378 norandommap=0 00:09:16.378 numjobs=1 00:09:16.378 00:09:16.378 verify_dump=1 00:09:16.378 verify_backlog=512 00:09:16.378 verify_state_save=0 00:09:16.378 do_verify=1 00:09:16.378 verify=crc32c-intel 00:09:16.378 [job0] 00:09:16.378 filename=/dev/nvme0n1 00:09:16.378 [job1] 00:09:16.378 filename=/dev/nvme0n2 00:09:16.378 [job2] 00:09:16.378 filename=/dev/nvme0n3 00:09:16.378 [job3] 00:09:16.378 filename=/dev/nvme0n4 00:09:16.378 Could not set queue depth (nvme0n1) 00:09:16.378 Could not set queue depth (nvme0n2) 00:09:16.378 Could not set queue depth (nvme0n3) 00:09:16.378 Could not set queue depth (nvme0n4) 00:09:16.636 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, 
iodepth=128 00:09:16.636 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.636 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.636 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:09:16.636 fio-3.35 00:09:16.636 Starting 4 threads 00:09:18.015 00:09:18.015 job0: (groupid=0, jobs=1): err= 0: pid=3106969: Wed Nov 20 10:26:58 2024 00:09:18.015 read: IOPS=5666, BW=22.1MiB/s (23.2MB/s)(22.2MiB/1005msec) 00:09:18.015 slat (nsec): min=1394, max=15692k, avg=81010.93, stdev=546104.97 00:09:18.015 clat (usec): min=1519, max=26333, avg=10391.12, stdev=2556.81 00:09:18.015 lat (usec): min=4356, max=26534, avg=10472.13, stdev=2582.16 00:09:18.015 clat percentiles (usec): 00:09:18.015 | 1.00th=[ 6521], 5.00th=[ 7635], 10.00th=[ 8455], 20.00th=[ 9372], 00:09:18.015 | 30.00th=[ 9634], 40.00th=[ 9634], 50.00th=[ 9896], 60.00th=[10028], 00:09:18.015 | 70.00th=[10290], 80.00th=[11469], 90.00th=[12387], 95.00th=[13698], 00:09:18.015 | 99.00th=[24773], 99.50th=[25297], 99.90th=[26346], 99.95th=[26346], 00:09:18.015 | 99.99th=[26346] 00:09:18.016 write: IOPS=6113, BW=23.9MiB/s (25.0MB/s)(24.0MiB/1005msec); 0 zone resets 00:09:18.016 slat (usec): min=2, max=13808, avg=81.02, stdev=535.58 00:09:18.016 clat (usec): min=1586, max=32716, avg=11062.00, stdev=3288.52 00:09:18.016 lat (usec): min=1617, max=32856, avg=11143.01, stdev=3338.71 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 5080], 5.00th=[ 7767], 10.00th=[ 9110], 20.00th=[ 9503], 00:09:18.016 | 30.00th=[ 9634], 40.00th=[ 9896], 50.00th=[10028], 60.00th=[10290], 00:09:18.016 | 70.00th=[11207], 80.00th=[12387], 90.00th=[15664], 95.00th=[19006], 00:09:18.016 | 99.00th=[23200], 99.50th=[23725], 99.90th=[26608], 99.95th=[29230], 00:09:18.016 | 99.99th=[32637] 00:09:18.016 bw ( KiB/s): min=24056, max=24576, per=31.85%, 
avg=24316.00, stdev=367.70, samples=2 00:09:18.016 iops : min= 6014, max= 6144, avg=6079.00, stdev=91.92, samples=2 00:09:18.016 lat (msec) : 2=0.03%, 4=0.05%, 10=55.38%, 20=41.53%, 50=3.01% 00:09:18.016 cpu : usr=5.28%, sys=7.97%, ctx=538, majf=0, minf=1 00:09:18.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.5% 00:09:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.016 issued rwts: total=5695,6144,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.016 job1: (groupid=0, jobs=1): err= 0: pid=3106982: Wed Nov 20 10:26:58 2024 00:09:18.016 read: IOPS=4585, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1005msec) 00:09:18.016 slat (nsec): min=1312, max=23291k, avg=114853.73, stdev=848314.79 00:09:18.016 clat (usec): min=4885, max=62342, avg=13630.24, stdev=9249.30 00:09:18.016 lat (usec): min=4895, max=62367, avg=13745.10, stdev=9327.53 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 6587], 5.00th=[ 7373], 10.00th=[ 8094], 20.00th=[ 9372], 00:09:18.016 | 30.00th=[ 9634], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10290], 00:09:18.016 | 70.00th=[11600], 80.00th=[13304], 90.00th=[26346], 95.00th=[36963], 00:09:18.016 | 99.00th=[50070], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:09:18.016 | 99.99th=[62129] 00:09:18.016 write: IOPS=4838, BW=18.9MiB/s (19.8MB/s)(19.0MiB/1005msec); 0 zone resets 00:09:18.016 slat (usec): min=2, max=25444, avg=89.81, stdev=744.06 00:09:18.016 clat (usec): min=548, max=65667, avg=13252.07, stdev=8230.69 00:09:18.016 lat (usec): min=669, max=65690, avg=13341.88, stdev=8291.44 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 6128], 5.00th=[ 7701], 10.00th=[ 9241], 20.00th=[ 9765], 00:09:18.016 | 30.00th=[10028], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:09:18.016 | 70.00th=[11076], 
80.00th=[13173], 90.00th=[26870], 95.00th=[31851], 00:09:18.016 | 99.00th=[50070], 99.50th=[54264], 99.90th=[54789], 99.95th=[54789], 00:09:18.016 | 99.99th=[65799] 00:09:18.016 bw ( KiB/s): min=12288, max=25600, per=24.82%, avg=18944.00, stdev=9413.01, samples=2 00:09:18.016 iops : min= 3072, max= 6400, avg=4736.00, stdev=2353.25, samples=2 00:09:18.016 lat (usec) : 750=0.01% 00:09:18.016 lat (msec) : 4=0.01%, 10=38.60%, 20=48.04%, 50=11.94%, 100=1.39% 00:09:18.016 cpu : usr=4.08%, sys=5.48%, ctx=587, majf=0, minf=1 00:09:18.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3% 00:09:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.016 issued rwts: total=4608,4863,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.016 job2: (groupid=0, jobs=1): err= 0: pid=3106998: Wed Nov 20 10:26:58 2024 00:09:18.016 read: IOPS=4135, BW=16.2MiB/s (16.9MB/s)(16.2MiB/1006msec) 00:09:18.016 slat (nsec): min=1091, max=10706k, avg=111042.45, stdev=705973.07 00:09:18.016 clat (usec): min=2498, max=30187, avg=14055.66, stdev=3545.48 00:09:18.016 lat (usec): min=6688, max=30189, avg=14166.70, stdev=3589.85 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 7504], 5.00th=[ 8979], 10.00th=[10683], 20.00th=[11076], 00:09:18.016 | 30.00th=[11469], 40.00th=[13304], 50.00th=[13698], 60.00th=[14222], 00:09:18.016 | 70.00th=[14746], 80.00th=[16057], 90.00th=[19268], 95.00th=[20841], 00:09:18.016 | 99.00th=[26346], 99.50th=[27657], 99.90th=[27657], 99.95th=[30278], 00:09:18.016 | 99.99th=[30278] 00:09:18.016 write: IOPS=4580, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1006msec); 0 zone resets 00:09:18.016 slat (nsec): min=1778, max=15225k, avg=103034.90, stdev=612787.20 00:09:18.016 clat (usec): min=1010, max=29530, avg=15001.15, stdev=5088.65 00:09:18.016 lat (usec): min=1018, 
max=29558, avg=15104.19, stdev=5142.72 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 5473], 5.00th=[ 8094], 10.00th=[ 9503], 20.00th=[10814], 00:09:18.016 | 30.00th=[11469], 40.00th=[12125], 50.00th=[13304], 60.00th=[16057], 00:09:18.016 | 70.00th=[18744], 80.00th=[21365], 90.00th=[22152], 95.00th=[22414], 00:09:18.016 | 99.00th=[25560], 99.50th=[28181], 99.90th=[28443], 99.95th=[28443], 00:09:18.016 | 99.99th=[29492] 00:09:18.016 bw ( KiB/s): min=15872, max=20480, per=23.81%, avg=18176.00, stdev=3258.35, samples=2 00:09:18.016 iops : min= 3968, max= 5120, avg=4544.00, stdev=814.59, samples=2 00:09:18.016 lat (msec) : 2=0.02%, 4=0.18%, 10=10.03%, 20=74.18%, 50=15.59% 00:09:18.016 cpu : usr=2.39%, sys=5.27%, ctx=432, majf=0, minf=2 00:09:18.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:09:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.016 issued rwts: total=4160,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.016 job3: (groupid=0, jobs=1): err= 0: pid=3107004: Wed Nov 20 10:26:58 2024 00:09:18.016 read: IOPS=3185, BW=12.4MiB/s (13.0MB/s)(12.5MiB/1002msec) 00:09:18.016 slat (nsec): min=1374, max=18503k, avg=152315.14, stdev=1082596.18 00:09:18.016 clat (usec): min=1652, max=53941, avg=20419.62, stdev=10623.44 00:09:18.016 lat (usec): min=1658, max=53948, avg=20571.93, stdev=10692.31 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 4817], 5.00th=[10290], 10.00th=[11076], 20.00th=[12518], 00:09:18.016 | 30.00th=[13960], 40.00th=[15401], 50.00th=[18220], 60.00th=[20579], 00:09:18.016 | 70.00th=[22152], 80.00th=[24773], 90.00th=[34341], 95.00th=[49021], 00:09:18.016 | 99.00th=[53740], 99.50th=[53740], 99.90th=[53740], 99.95th=[53740], 00:09:18.016 | 99.99th=[53740] 00:09:18.016 write: IOPS=3576, BW=14.0MiB/s 
(14.7MB/s)(14.0MiB/1002msec); 0 zone resets 00:09:18.016 slat (usec): min=2, max=16541, avg=126.96, stdev=795.82 00:09:18.016 clat (usec): min=464, max=48684, avg=17248.29, stdev=8923.99 00:09:18.016 lat (usec): min=575, max=48699, avg=17375.25, stdev=9000.34 00:09:18.016 clat percentiles (usec): 00:09:18.016 | 1.00th=[ 6390], 5.00th=[ 9241], 10.00th=[10421], 20.00th=[11600], 00:09:18.016 | 30.00th=[12125], 40.00th=[12518], 50.00th=[13304], 60.00th=[14222], 00:09:18.016 | 70.00th=[18744], 80.00th=[21890], 90.00th=[31851], 95.00th=[40109], 00:09:18.016 | 99.00th=[45876], 99.50th=[47449], 99.90th=[48497], 99.95th=[48497], 00:09:18.016 | 99.99th=[48497] 00:09:18.016 bw ( KiB/s): min=12232, max=16384, per=18.74%, avg=14308.00, stdev=2935.91, samples=2 00:09:18.016 iops : min= 3058, max= 4096, avg=3577.00, stdev=733.98, samples=2 00:09:18.016 lat (usec) : 500=0.01% 00:09:18.016 lat (msec) : 2=0.30%, 10=6.18%, 20=58.49%, 50=32.90%, 100=2.13% 00:09:18.016 cpu : usr=1.90%, sys=5.49%, ctx=277, majf=0, minf=1 00:09:18.016 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.5%, >=64=99.1% 00:09:18.016 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:18.016 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:09:18.016 issued rwts: total=3192,3584,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:18.016 latency : target=0, window=0, percentile=100.00%, depth=128 00:09:18.016 00:09:18.016 Run status group 0 (all jobs): 00:09:18.016 READ: bw=68.6MiB/s (71.9MB/s), 12.4MiB/s-22.1MiB/s (13.0MB/s-23.2MB/s), io=69.0MiB (72.3MB), run=1002-1006msec 00:09:18.016 WRITE: bw=74.5MiB/s (78.2MB/s), 14.0MiB/s-23.9MiB/s (14.7MB/s-25.0MB/s), io=75.0MiB (78.6MB), run=1002-1006msec 00:09:18.016 00:09:18.016 Disk stats (read/write): 00:09:18.016 nvme0n1: ios=5169/5187, merge=0/0, ticks=28704/30725, in_queue=59429, util=89.78% 00:09:18.016 nvme0n2: ios=3607/4015, merge=0/0, ticks=29216/27572, in_queue=56788, util=97.16% 00:09:18.016 nvme0n3: 
ios=3641/4055, merge=0/0, ticks=30501/37009, in_queue=67510, util=90.84% 00:09:18.016 nvme0n4: ios=2583/3048, merge=0/0, ticks=29485/31535, in_queue=61020, util=98.22% 00:09:18.016 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:09:18.016 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3107092 00:09:18.016 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:09:18.016 10:26:58 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:09:18.016 [global] 00:09:18.016 thread=1 00:09:18.016 invalidate=1 00:09:18.016 rw=read 00:09:18.016 time_based=1 00:09:18.016 runtime=10 00:09:18.016 ioengine=libaio 00:09:18.016 direct=1 00:09:18.016 bs=4096 00:09:18.016 iodepth=1 00:09:18.016 norandommap=1 00:09:18.016 numjobs=1 00:09:18.016 00:09:18.016 [job0] 00:09:18.016 filename=/dev/nvme0n1 00:09:18.016 [job1] 00:09:18.016 filename=/dev/nvme0n2 00:09:18.016 [job2] 00:09:18.016 filename=/dev/nvme0n3 00:09:18.016 [job3] 00:09:18.016 filename=/dev/nvme0n4 00:09:18.016 Could not set queue depth (nvme0n1) 00:09:18.017 Could not set queue depth (nvme0n2) 00:09:18.017 Could not set queue depth (nvme0n3) 00:09:18.017 Could not set queue depth (nvme0n4) 00:09:18.275 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.275 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.275 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.275 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:09:18.275 fio-3.35 00:09:18.275 Starting 4 threads 00:09:20.803 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:09:21.060 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=27488256, buflen=4096 00:09:21.060 fio: pid=3107391, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.060 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:09:21.318 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=45748224, buflen=4096 00:09:21.318 fio: pid=3107390, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.318 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.318 10:27:01 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:09:21.576 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.576 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:09:21.576 fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=11751424, buflen=4096 00:09:21.576 fio: pid=3107388, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.834 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=14639104, buflen=4096 00:09:21.834 fio: pid=3107389, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:09:21.834 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.834 10:27:02 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:09:21.834 00:09:21.834 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3107388: Wed Nov 20 10:27:02 2024 00:09:21.834 read: IOPS=905, BW=3622KiB/s (3709kB/s)(11.2MiB/3168msec) 00:09:21.834 slat (usec): min=6, max=25497, avg=19.59, stdev=502.34 00:09:21.834 clat (usec): min=172, max=42029, avg=1075.88, stdev=5649.39 00:09:21.834 lat (usec): min=179, max=42051, avg=1095.47, stdev=5672.26 00:09:21.834 clat percentiles (usec): 00:09:21.834 | 1.00th=[ 190], 5.00th=[ 212], 10.00th=[ 227], 20.00th=[ 239], 00:09:21.834 | 30.00th=[ 247], 40.00th=[ 255], 50.00th=[ 262], 60.00th=[ 269], 00:09:21.834 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 347], 95.00th=[ 416], 00:09:21.834 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[41681], 00:09:21.834 | 99.99th=[42206] 00:09:21.834 bw ( KiB/s): min= 96, max= 9189, per=12.17%, avg=3511.50, stdev=3703.40, samples=6 00:09:21.834 iops : min= 24, max= 2297, avg=877.83, stdev=925.77, samples=6 00:09:21.834 lat (usec) : 250=34.25%, 500=63.55%, 750=0.03%, 1000=0.03% 00:09:21.834 lat (msec) : 2=0.03%, 4=0.03%, 10=0.03%, 20=0.03%, 50=1.95% 00:09:21.834 cpu : usr=0.13%, sys=0.98%, ctx=2873, majf=0, minf=2 00:09:21.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 issued rwts: total=2870,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.834 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3107389: Wed Nov 20 10:27:02 2024 00:09:21.834 read: IOPS=1060, BW=4241KiB/s 
(4343kB/s)(14.0MiB/3371msec) 00:09:21.834 slat (usec): min=7, max=10800, avg=11.87, stdev=180.51 00:09:21.834 clat (usec): min=171, max=50137, avg=923.00, stdev=5225.13 00:09:21.834 lat (usec): min=179, max=51897, avg=934.86, stdev=5252.49 00:09:21.834 clat percentiles (usec): 00:09:21.834 | 1.00th=[ 188], 5.00th=[ 196], 10.00th=[ 200], 20.00th=[ 206], 00:09:21.834 | 30.00th=[ 212], 40.00th=[ 217], 50.00th=[ 223], 60.00th=[ 229], 00:09:21.834 | 70.00th=[ 245], 80.00th=[ 273], 90.00th=[ 293], 95.00th=[ 347], 00:09:21.834 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42730], 00:09:21.834 | 99.99th=[50070] 00:09:21.834 bw ( KiB/s): min= 176, max=17216, per=16.02%, avg=4625.67, stdev=6353.70, samples=6 00:09:21.834 iops : min= 44, max= 4304, avg=1156.33, stdev=1588.44, samples=6 00:09:21.834 lat (usec) : 250=71.58%, 500=26.41%, 750=0.17%, 1000=0.03% 00:09:21.834 lat (msec) : 2=0.03%, 4=0.06%, 20=0.06%, 50=1.62%, 100=0.03% 00:09:21.834 cpu : usr=0.53%, sys=1.84%, ctx=3577, majf=0, minf=1 00:09:21.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 issued rwts: total=3575,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.834 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3107390: Wed Nov 20 10:27:02 2024 00:09:21.834 read: IOPS=3798, BW=14.8MiB/s (15.6MB/s)(43.6MiB/2941msec) 00:09:21.834 slat (nsec): min=7145, max=47216, avg=8876.78, stdev=1773.13 00:09:21.834 clat (usec): min=171, max=40650, avg=250.42, stdev=386.75 00:09:21.834 lat (usec): min=179, max=40658, avg=259.29, stdev=386.75 00:09:21.834 clat percentiles (usec): 00:09:21.834 | 1.00th=[ 186], 5.00th=[ 194], 10.00th=[ 202], 20.00th=[ 212], 00:09:21.834 | 30.00th=[ 
221], 40.00th=[ 233], 50.00th=[ 241], 60.00th=[ 247], 00:09:21.834 | 70.00th=[ 258], 80.00th=[ 269], 90.00th=[ 289], 95.00th=[ 338], 00:09:21.834 | 99.00th=[ 412], 99.50th=[ 433], 99.90th=[ 791], 99.95th=[ 1352], 00:09:21.834 | 99.99th=[ 2638] 00:09:21.834 bw ( KiB/s): min=14272, max=17280, per=53.04%, avg=15307.20, stdev=1220.59, samples=5 00:09:21.834 iops : min= 3568, max= 4320, avg=3826.80, stdev=305.15, samples=5 00:09:21.834 lat (usec) : 250=63.00%, 500=36.84%, 750=0.04%, 1000=0.04% 00:09:21.834 lat (msec) : 2=0.04%, 4=0.02%, 50=0.01% 00:09:21.834 cpu : usr=2.38%, sys=6.16%, ctx=11170, majf=0, minf=2 00:09:21.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 issued rwts: total=11170,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.834 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3107391: Wed Nov 20 10:27:02 2024 00:09:21.834 read: IOPS=2456, BW=9826KiB/s (10.1MB/s)(26.2MiB/2732msec) 00:09:21.834 slat (nsec): min=7141, max=41540, avg=8574.11, stdev=1833.46 00:09:21.834 clat (usec): min=189, max=41546, avg=392.49, stdev=2403.10 00:09:21.834 lat (usec): min=198, max=41569, avg=401.06, stdev=2403.90 00:09:21.834 clat percentiles (usec): 00:09:21.834 | 1.00th=[ 210], 5.00th=[ 223], 10.00th=[ 229], 20.00th=[ 237], 00:09:21.834 | 30.00th=[ 241], 40.00th=[ 243], 50.00th=[ 247], 60.00th=[ 251], 00:09:21.834 | 70.00th=[ 253], 80.00th=[ 258], 90.00th=[ 265], 95.00th=[ 269], 00:09:21.834 | 99.00th=[ 355], 99.50th=[ 482], 99.90th=[41157], 99.95th=[41157], 00:09:21.834 | 99.99th=[41681] 00:09:21.834 bw ( KiB/s): min= 104, max=15512, per=32.94%, avg=9507.20, stdev=7280.19, samples=5 00:09:21.834 iops : min= 26, max= 3878, avg=2376.80, 
stdev=1820.05, samples=5 00:09:21.834 lat (usec) : 250=59.65%, 500=39.85%, 750=0.03% 00:09:21.834 lat (msec) : 2=0.07%, 4=0.01%, 50=0.36% 00:09:21.834 cpu : usr=1.21%, sys=4.21%, ctx=6712, majf=0, minf=2 00:09:21.834 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:09:21.834 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:09:21.834 issued rwts: total=6712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:09:21.834 latency : target=0, window=0, percentile=100.00%, depth=1 00:09:21.834 00:09:21.834 Run status group 0 (all jobs): 00:09:21.834 READ: bw=28.2MiB/s (29.6MB/s), 3622KiB/s-14.8MiB/s (3709kB/s-15.6MB/s), io=95.0MiB (99.6MB), run=2732-3371msec 00:09:21.834 00:09:21.834 Disk stats (read/write): 00:09:21.834 nvme0n1: ios=2868/0, merge=0/0, ticks=3036/0, in_queue=3036, util=94.73% 00:09:21.834 nvme0n2: ios=3574/0, merge=0/0, ticks=3258/0, in_queue=3258, util=96.10% 00:09:21.834 nvme0n3: ios=10920/0, merge=0/0, ticks=2582/0, in_queue=2582, util=96.49% 00:09:21.834 nvme0n4: ios=6327/0, merge=0/0, ticks=2473/0, in_queue=2473, util=96.45% 00:09:21.834 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:21.834 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:09:22.092 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.092 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:09:22.353 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs 
$raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.353 10:27:02 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:09:22.611 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:09:22.611 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 3107092 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:09:22.869 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:09:22.869 10:27:03 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:09:22.869 nvmf hotplug test: fio failed as expected 00:09:22.869 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:23.127 rmmod nvme_tcp 00:09:23.127 rmmod nvme_fabrics 00:09:23.127 rmmod nvme_keyring 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@106 -- # 
set -e 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 3104319 ']' 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 3104319 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3104319 ']' 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3104319 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:23.127 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3104319 00:09:23.386 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:23.386 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:23.386 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3104319' 00:09:23.386 killing process with pid 3104319 00:09:23.386 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3104319 00:09:23.386 10:27:03 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3104319 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@267 
-- # remove_target_ns 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:23.386 10:27:04 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@275 -- 
# (( 4 == 3 )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:09:25.922 00:09:25.922 real 0m27.074s 00:09:25.922 user 1m47.198s 00:09:25.922 sys 0m8.956s 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:09:25.922 ************************************ 00:09:25.922 END TEST nvmf_fio_target 00:09:25.922 ************************************ 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.922 10:27:06 
nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:25.922 ************************************ 00:09:25.922 START TEST nvmf_bdevio 00:09:25.922 ************************************ 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:09:25.922 * Looking for test storage... 00:09:25.922 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
scripts/common.sh@338 -- # local 'op=<' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 
-- # (( ver1[v] < ver2[v] )) 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.922 --rc genhtml_branch_coverage=1 00:09:25.922 --rc genhtml_function_coverage=1 00:09:25.922 --rc genhtml_legend=1 00:09:25.922 --rc geninfo_all_blocks=1 00:09:25.922 --rc geninfo_unexecuted_blocks=1 00:09:25.922 00:09:25.922 ' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.922 --rc genhtml_branch_coverage=1 00:09:25.922 --rc genhtml_function_coverage=1 00:09:25.922 --rc genhtml_legend=1 00:09:25.922 --rc geninfo_all_blocks=1 00:09:25.922 --rc geninfo_unexecuted_blocks=1 00:09:25.922 00:09:25.922 ' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.922 --rc genhtml_branch_coverage=1 00:09:25.922 --rc genhtml_function_coverage=1 00:09:25.922 --rc genhtml_legend=1 00:09:25.922 --rc geninfo_all_blocks=1 00:09:25.922 --rc geninfo_unexecuted_blocks=1 00:09:25.922 00:09:25.922 ' 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:25.922 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:25.922 --rc genhtml_branch_coverage=1 00:09:25.922 --rc genhtml_function_coverage=1 00:09:25.922 --rc genhtml_legend=1 00:09:25.922 --rc geninfo_all_blocks=1 00:09:25.922 --rc geninfo_unexecuted_blocks=1 00:09:25.922 00:09:25.922 ' 
00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:09:25.922 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:09:25.923 
10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:25.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio 
-- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:09:25.923 10:27:06 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # 
x722=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:32.490 
10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:32.490 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:32.491 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:32.491 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:32.491 Found net devices under 0000:86:00.0: cvl_0_0 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ 
tcp == tcp ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:32.491 Found net devices under 0000:86:00.1: cvl_0_1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:32.491 10:27:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:32.491 10.0.0.1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@207 -- # 
ip=10.0.0.2 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:32.491 10.0.0.2 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:32.491 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@217 -- 
# ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:32.492 10:27:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:32.492 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:32.492 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.244 ms 00:09:32.492 00:09:32.492 --- 10.0.0.1 ping statistics --- 00:09:32.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.492 rtt min/avg/max/mdev = 0.244/0.244/0.244/0.000 ms 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:32.492 10:27:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:32.492 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:09:32.492 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:09:32.492 00:09:32.492 --- 10.0.0.2 ping statistics --- 00:09:32.492 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:32.492 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ 
-n initiator0 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:32.492 10:27:12 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:32.492 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:32.493 
10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=3112231 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 3112231 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3112231 ']' 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:32.493 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:32.493 10:27:12 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.493 [2024-11-20 10:27:12.569155] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:32.493 [2024-11-20 10:27:12.569230] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:32.493 [2024-11-20 10:27:12.650651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:32.493 [2024-11-20 10:27:12.695770] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:32.493 [2024-11-20 10:27:12.695807] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:32.493 [2024-11-20 10:27:12.695814] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:32.493 [2024-11-20 10:27:12.695820] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:32.493 [2024-11-20 10:27:12.695825] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:09:32.493 [2024-11-20 10:27:12.697399] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:09:32.493 [2024-11-20 10:27:12.697424] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:09:32.493 [2024-11-20 10:27:12.697454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:09:32.493 [2024-11-20 10:27:12.697454] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:09:32.750 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:32.751 [2024-11-20 10:27:13.446172] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:32.751 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:32.751 10:27:13 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.008 Malloc0 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.008 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.009 [2024-11-20 10:27:13.520814] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio 
--json /dev/fd/62 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:33.009 { 00:09:33.009 "params": { 00:09:33.009 "name": "Nvme$subsystem", 00:09:33.009 "trtype": "$TEST_TRANSPORT", 00:09:33.009 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:33.009 "adrfam": "ipv4", 00:09:33.009 "trsvcid": "$NVMF_PORT", 00:09:33.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:33.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:33.009 "hdgst": ${hdgst:-false}, 00:09:33.009 "ddgst": ${ddgst:-false} 00:09:33.009 }, 00:09:33.009 "method": "bdev_nvme_attach_controller" 00:09:33.009 } 00:09:33.009 EOF 00:09:33.009 )") 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:09:33.009 10:27:13 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:33.009 "params": { 00:09:33.009 "name": "Nvme1", 00:09:33.009 "trtype": "tcp", 00:09:33.009 "traddr": "10.0.0.2", 00:09:33.009 "adrfam": "ipv4", 00:09:33.009 "trsvcid": "4420", 00:09:33.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:33.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:33.009 "hdgst": false, 00:09:33.009 "ddgst": false 00:09:33.009 }, 00:09:33.009 "method": "bdev_nvme_attach_controller" 00:09:33.009 }' 00:09:33.009 [2024-11-20 10:27:13.573513] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:33.009 [2024-11-20 10:27:13.573554] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3112435 ] 00:09:33.009 [2024-11-20 10:27:13.649166] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:09:33.009 [2024-11-20 10:27:13.693004] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:33.009 [2024-11-20 10:27:13.693110] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:33.009 [2024-11-20 10:27:13.693111] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:09:33.266 I/O targets: 00:09:33.266 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:09:33.266 00:09:33.266 00:09:33.266 CUnit - A unit testing framework for C - Version 2.1-3 00:09:33.266 http://cunit.sourceforge.net/ 00:09:33.266 00:09:33.266 00:09:33.266 Suite: bdevio tests on: Nvme1n1 00:09:33.266 Test: blockdev write read block ...passed 00:09:33.266 Test: blockdev write zeroes read block ...passed 00:09:33.266 Test: blockdev write zeroes read no split ...passed 00:09:33.266 Test: blockdev write zeroes read split 
...passed 00:09:33.266 Test: blockdev write zeroes read split partial ...passed 00:09:33.266 Test: blockdev reset ...[2024-11-20 10:27:13.965535] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:09:33.266 [2024-11-20 10:27:13.965601] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a33340 (9): Bad file descriptor 00:09:33.522 [2024-11-20 10:27:14.018454] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:09:33.522 passed 00:09:33.522 Test: blockdev write read 8 blocks ...passed 00:09:33.522 Test: blockdev write read size > 128k ...passed 00:09:33.522 Test: blockdev write read invalid size ...passed 00:09:33.522 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:09:33.522 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:09:33.522 Test: blockdev write read max offset ...passed 00:09:33.522 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:09:33.522 Test: blockdev writev readv 8 blocks ...passed 00:09:33.522 Test: blockdev writev readv 30 x 1block ...passed 00:09:33.522 Test: blockdev writev readv block ...passed 00:09:33.522 Test: blockdev writev readv size > 128k ...passed 00:09:33.522 Test: blockdev writev readv size > 128k in two iovs ...passed 00:09:33.522 Test: blockdev comparev and writev ...[2024-11-20 10:27:14.187835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.187865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.187879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 
10:27:14.187887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:09:33.522 [2024-11-20 10:27:14.188664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x200 00:09:33.522 [2024-11-20 10:27:14.188672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:09:33.522 passed 00:09:33.781 Test: blockdev nvme passthru rw ...passed 00:09:33.781 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:27:14.271579] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.781 [2024-11-20 10:27:14.271602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:09:33.781 [2024-11-20 10:27:14.271707] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.781 [2024-11-20 10:27:14.271717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:09:33.781 [2024-11-20 10:27:14.271814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.781 [2024-11-20 10:27:14.271823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:09:33.781 [2024-11-20 10:27:14.271924] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:09:33.781 [2024-11-20 10:27:14.271933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:09:33.781 passed 00:09:33.781 Test: blockdev nvme admin passthru ...passed 00:09:33.781 Test: blockdev copy ...passed 00:09:33.781 00:09:33.781 Run Summary: Type Total Ran Passed Failed Inactive 00:09:33.781 suites 1 1 n/a 0 0 00:09:33.781 tests 23 23 23 0 0 00:09:33.781 asserts 152 152 152 0 n/a 00:09:33.781 00:09:33.781 Elapsed time = 0.963 seconds 
00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:09:33.781 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:09:33.781 rmmod nvme_tcp 00:09:33.781 rmmod nvme_fabrics 00:09:34.039 rmmod nvme_keyring 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@336 -- # '[' -n 3112231 ']' 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- 
# '[' -z 3112231 ']' 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3112231' 00:09:34.039 killing process with pid 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3112231 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:09:34.039 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:09:34.297 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:09:34.297 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@267 -- # remove_target_ns 00:09:34.297 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:34.297 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:34.297 10:27:14 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@268 
-- # delete_main_bridge 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush 
dev cvl_0_1 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:09:36.202 00:09:36.202 real 0m10.665s 00:09:36.202 user 0m11.993s 00:09:36.202 sys 0m5.095s 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:09:36.202 ************************************ 00:09:36.202 END TEST nvmf_bdevio 00:09:36.202 ************************************ 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:36.202 ************************************ 00:09:36.202 START TEST nvmf_zcopy 00:09:36.202 ************************************ 
00:09:36.202 10:27:16 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:36.461 * Looking for test storage... 00:09:36.461 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:09:36.461 
--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.461 --rc genhtml_branch_coverage=1 00:09:36.461 --rc genhtml_function_coverage=1 00:09:36.461 --rc genhtml_legend=1 00:09:36.461 --rc geninfo_all_blocks=1 00:09:36.461 --rc geninfo_unexecuted_blocks=1 00:09:36.461 00:09:36.461 ' 00:09:36.461 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:09:36.461 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.462 --rc genhtml_branch_coverage=1 00:09:36.462 --rc genhtml_function_coverage=1 00:09:36.462 --rc genhtml_legend=1 00:09:36.462 --rc geninfo_all_blocks=1 00:09:36.462 --rc geninfo_unexecuted_blocks=1 00:09:36.462 00:09:36.462 ' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:09:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.462 --rc genhtml_branch_coverage=1 00:09:36.462 --rc genhtml_function_coverage=1 00:09:36.462 --rc genhtml_legend=1 00:09:36.462 --rc geninfo_all_blocks=1 00:09:36.462 --rc geninfo_unexecuted_blocks=1 00:09:36.462 00:09:36.462 ' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:09:36.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:36.462 --rc genhtml_branch_coverage=1 00:09:36.462 --rc genhtml_function_coverage=1 00:09:36.462 --rc genhtml_legend=1 00:09:36.462 --rc geninfo_all_blocks=1 00:09:36.462 --rc geninfo_unexecuted_blocks=1 00:09:36.462 00:09:36.462 ' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 
-- # NVMF_PORT=4420 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:36.462 10:27:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:09:36.462 10:27:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:09:36.462 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:09:36.462 10:27:17 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:09:36.462 10:27:17 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:09:43.032 10:27:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:09:43.032 10:27:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.032 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:43.032 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:43.033 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:09:43.033 10:27:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:43.033 Found net devices under 0000:86:00.0: cvl_0_0 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:09:43.033 
10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:43.033 Found net devices under 0000:86:00.1: cvl_0_1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 
key_target=target0 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:09:43.033 10.0.0.1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev 
cvl_0_1 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:09:43.033 10.0.0.2 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:09:43.033 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:09:43.034 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.034 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.034 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:09:43.034 10:27:22 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:09:43.034 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:43.034 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:09:43.034 00:09:43.034 --- 10.0.0.1 ping statistics --- 00:09:43.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.034 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 
-- # ip=10.0.0.2 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:09:43.034 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:43.034 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:09:43.034 00:09:43.034 --- 10.0.0.2 ping statistics --- 00:09:43.034 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:43.034 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.034 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:09:43.035 10:27:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/setup.sh@168 -- # get_net_dev target1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=3116218 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 3116218 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3116218 ']' 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:43.035 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 [2024-11-20 10:27:23.269534] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:43.035 [2024-11-20 10:27:23.269585] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:43.035 [2024-11-20 10:27:23.349537] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.035 [2024-11-20 10:27:23.390305] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:43.035 [2024-11-20 10:27:23.390342] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:09:43.035 [2024-11-20 10:27:23.390349] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:43.035 [2024-11-20 10:27:23.390355] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:43.035 [2024-11-20 10:27:23.390360] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:43.035 [2024-11-20 10:27:23.390911] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 [2024-11-20 10:27:23.529945] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:43.035 10:27:23 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 [2024-11-20 10:27:23.550125] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 malloc0 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd 
nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:43.035 { 00:09:43.035 "params": { 00:09:43.035 "name": "Nvme$subsystem", 00:09:43.035 "trtype": "$TEST_TRANSPORT", 00:09:43.035 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:43.035 "adrfam": "ipv4", 00:09:43.035 "trsvcid": "$NVMF_PORT", 00:09:43.035 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:43.035 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:43.035 "hdgst": ${hdgst:-false}, 00:09:43.035 "ddgst": ${ddgst:-false} 00:09:43.035 }, 00:09:43.035 "method": "bdev_nvme_attach_controller" 00:09:43.035 } 00:09:43.035 EOF 00:09:43.035 )") 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:09:43.035 10:27:23 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:43.035 "params": { 00:09:43.035 "name": "Nvme1", 00:09:43.035 "trtype": "tcp", 00:09:43.035 "traddr": "10.0.0.2", 00:09:43.035 "adrfam": "ipv4", 00:09:43.035 "trsvcid": "4420", 00:09:43.035 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:43.036 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:43.036 "hdgst": false, 00:09:43.036 "ddgst": false 00:09:43.036 }, 00:09:43.036 "method": "bdev_nvme_attach_controller" 00:09:43.036 }' 00:09:43.036 [2024-11-20 10:27:23.631984] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:09:43.036 [2024-11-20 10:27:23.632027] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3116245 ] 00:09:43.036 [2024-11-20 10:27:23.706627] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:43.036 [2024-11-20 10:27:23.747714] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:09:43.295 Running I/O for 10 seconds... 
00:09:45.604 8628.00 IOPS, 67.41 MiB/s [2024-11-20T09:27:27.269Z] 8682.00 IOPS, 67.83 MiB/s [2024-11-20T09:27:28.201Z] 8678.67 IOPS, 67.80 MiB/s [2024-11-20T09:27:29.135Z] 8708.75 IOPS, 68.04 MiB/s [2024-11-20T09:27:30.074Z] 8716.20 IOPS, 68.10 MiB/s [2024-11-20T09:27:31.010Z] 8726.50 IOPS, 68.18 MiB/s [2024-11-20T09:27:31.945Z] 8706.00 IOPS, 68.02 MiB/s [2024-11-20T09:27:33.321Z] 8716.12 IOPS, 68.09 MiB/s [2024-11-20T09:27:34.256Z] 8724.67 IOPS, 68.16 MiB/s [2024-11-20T09:27:34.256Z] 8727.70 IOPS, 68.19 MiB/s 00:09:53.525 Latency(us) 00:09:53.525 [2024-11-20T09:27:34.256Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:53.525 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:09:53.525 Verification LBA range: start 0x0 length 0x1000 00:09:53.525 Nvme1n1 : 10.01 8731.44 68.21 0.00 0.00 14618.73 238.93 23842.62 00:09:53.525 [2024-11-20T09:27:34.256Z] =================================================================================================================== 00:09:53.525 [2024-11-20T09:27:34.256Z] Total : 8731.44 68.21 0.00 0.00 14618.73 238.93 23842.62 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=3118076 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:09:53.525 10:27:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:09:53.525 { 00:09:53.525 "params": { 00:09:53.525 "name": "Nvme$subsystem", 00:09:53.525 "trtype": "$TEST_TRANSPORT", 00:09:53.525 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.525 "adrfam": "ipv4", 00:09:53.525 "trsvcid": "$NVMF_PORT", 00:09:53.525 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.525 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.525 "hdgst": ${hdgst:-false}, 00:09:53.525 "ddgst": ${ddgst:-false} 00:09:53.525 }, 00:09:53.525 "method": "bdev_nvme_attach_controller" 00:09:53.525 } 00:09:53.525 EOF 00:09:53.525 )") 00:09:53.525 [2024-11-20 10:27:34.101290] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.101321] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:09:53.525 10:27:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:09:53.525 "params": { 00:09:53.525 "name": "Nvme1", 00:09:53.525 "trtype": "tcp", 00:09:53.525 "traddr": "10.0.0.2", 00:09:53.525 "adrfam": "ipv4", 00:09:53.525 "trsvcid": "4420", 00:09:53.525 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.525 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.525 "hdgst": false, 00:09:53.525 "ddgst": false 00:09:53.525 }, 00:09:53.525 "method": "bdev_nvme_attach_controller" 00:09:53.525 }' 00:09:53.525 [2024-11-20 10:27:34.113292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.113307] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.125319] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.125343] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.137352] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.137364] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.145152] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:09:53.525 [2024-11-20 10:27:34.145193] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3118076 ] 00:09:53.525 [2024-11-20 10:27:34.149383] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.149394] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.161413] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.161423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.173448] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.173459] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.185478] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.185489] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.197508] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.197518] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.209541] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:53.525 [2024-11-20 10:27:34.209551] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:53.525 [2024-11-20 10:27:34.220021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.525 [2024-11-20 10:27:34.221571] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:09:53.525 [2024-11-20 10:27:34.221580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:53.525 [2024-11-20 10:27:34.233609] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:53.525 [2024-11-20 10:27:34.233624] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:53.784 [2024-11-20 10:27:34.261907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:09:53.785 Running I/O for 5 seconds...
00:09:54.821 16933.00 IOPS, 132.29 MiB/s [2024-11-20T09:27:35.552Z]
00:09:55.856 16999.50 IOPS, 132.81 MiB/s [2024-11-20T09:27:36.587Z]
[repeating error pair elided: "subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use" followed by "nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace", recurring roughly every 10-15 ms from 10:27:34.221 through 10:27:36.481]
[2024-11-20 10:27:36.481083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.490103] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.490122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.504229] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.504248] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.517972] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.517991] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.531167] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.531186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.544910] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.544929] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.558600] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.558619] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.572551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.856 [2024-11-20 10:27:36.572570] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:55.856 [2024-11-20 10:27:36.581494] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:55.857 [2024-11-20 10:27:36.581513] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.595648] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.595667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.605000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.605019] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.619097] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.619117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.632221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.632241] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.646008] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.646027] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.659602] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.659620] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.673516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.673536] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.688004] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.688024] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:56.115 [2024-11-20 10:27:36.698729] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.698748] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.712859] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.712879] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.726419] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.726440] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.740074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.740093] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.753364] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.753383] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.767346] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.767365] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.780674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.780694] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.794293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.794314] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.807912] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.807931] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.821242] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.821262] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.115 [2024-11-20 10:27:36.835041] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.115 [2024-11-20 10:27:36.835061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.848996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.849016] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.862632] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.862652] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.875927] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.875946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.889713] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.889732] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.903491] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.903509] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.917261] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.917280] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.930717] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.930737] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.944818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.944838] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.958425] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.958444] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.972098] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.972117] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.985924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.985944] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:36.999527] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:36.999547] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.013046] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.013066] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.026815] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 
[2024-11-20 10:27:37.026834] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.040642] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.040661] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.054221] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.054239] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.067875] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.067895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.081453] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.081473] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.375 [2024-11-20 10:27:37.095306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.375 [2024-11-20 10:27:37.095326] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.109315] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.109339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.123209] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.123228] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.136715] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.136734] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.150495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.150514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.164366] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.164385] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.178181] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.178200] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.192634] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.192654] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.203557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.203576] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.217294] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.217313] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.231340] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.231359] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.245415] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.245434] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:09:56.634 [2024-11-20 10:27:37.259256] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.259275] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.273237] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.273256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.287020] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.287040] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.300573] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.300593] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.314404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.314423] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.323577] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.323596] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.337695] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.337714] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.351171] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.351191] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.634 [2024-11-20 10:27:37.360233] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.634 [2024-11-20 10:27:37.360256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.374239] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.374259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.387942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.387961] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.401920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.401939] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.415651] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.415672] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.429252] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.429271] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.443220] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.443240] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.456697] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.456717] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 17026.33 IOPS, 133.02 MiB/s [2024-11-20T09:27:37.624Z] [2024-11-20 10:27:37.470079] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.470100] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.483674] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.483693] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.497567] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.497589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.510909] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.510930] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.525192] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.525219] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.535746] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.535766] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.550093] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.893 [2024-11-20 10:27:37.550113] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.893 [2024-11-20 10:27:37.563682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.894 [2024-11-20 10:27:37.563701] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.894 [2024-11-20 10:27:37.573040] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:09:56.894 [2024-11-20 10:27:37.573060] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.894 [2024-11-20 10:27:37.587109] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.894 [2024-11-20 10:27:37.587129] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.894 [2024-11-20 10:27:37.600584] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.894 [2024-11-20 10:27:37.600604] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:56.894 [2024-11-20 10:27:37.614557] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:56.894 [2024-11-20 10:27:37.614581] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.628473] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.628492] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.642128] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.642148] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.655899] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.655919] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.670003] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.670023] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.683624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 
[2024-11-20 10:27:37.683645] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.697749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.697770] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.711302] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.711323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.720401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.720420] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.734265] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.734285] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.748149] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.748168] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.761748] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.761767] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.775497] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.775517] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.789246] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.789266] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.803083] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.803103] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.816706] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.816728] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.830506] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.830528] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.844435] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.844456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.858169] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.858188] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.153 [2024-11-20 10:27:37.871942] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.153 [2024-11-20 10:27:37.871962] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.412 [2024-11-20 10:27:37.885433] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.412 [2024-11-20 10:27:37.885452] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:09:57.412 [2024-11-20 10:27:37.899457] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:57.412 [2024-11-20 10:27:37.899476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace
00:09:57.412 [2024-11-20 10:27:37.912569] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:57.412 [2024-11-20 10:27:37.912589] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same two-line error pair (subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace") repeats roughly every 14 ms from 10:27:37.926 through 10:27:38.462 ...]
00:09:57.930 17041.25 IOPS, 133.13 MiB/s [2024-11-20T09:27:38.661Z]
[... the error pair continues at the same cadence from 10:27:38.476 through 10:27:39.452 ...]
00:09:58.966 [2024-11-20 10:27:39.465624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:09:58.966
[2024-11-20 10:27:39.465642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:58.966 17056.80 IOPS, 133.26 MiB/s [2024-11-20T09:27:39.697Z]
[2024-11-20 10:27:39.478047] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:58.966 [2024-11-20 10:27:39.478067] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:09:58.966
00:09:58.966 Latency(us)
00:09:58.966 [2024-11-20T09:27:39.697Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:09:58.966 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:09:58.966 Nvme1n1 : 5.01 17059.01 133.27 0.00 0.00 7496.33 3604.48 15603.81
00:09:58.966 [2024-11-20T09:27:39.697Z] ===================================================================================================================
00:09:58.966 [2024-11-20T09:27:39.697Z] Total : 17059.01 133.27 0.00 0.00 7496.33 3604.48 15603.81
00:09:58.966 [2024-11-20 10:27:39.487682] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:09:58.966 [2024-11-20 10:27:39.487697] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same error pair repeats roughly every 12 ms from 10:27:39.499 through 10:27:39.632 ...]
00:09:58.966 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (3118076) - No such process
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 3118076
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:58.966 delay0
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:09:58.966 10:27:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@51 -- #
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'
[2024-11-20 10:27:39.781795] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral
00:10:05.910 [2024-11-20 10:27:45.960126] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230b820 is same with the state(6) to be set
00:10:05.910 [2024-11-20 10:27:45.960166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x230b820 is same with the state(6) to be set
00:10:05.910 Initializing NVMe Controllers
00:10:05.910 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:10:05.910 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0
00:10:05.910 Initialization complete. Launching workers.
00:10:05.910 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 118
00:10:05.910 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 389, failed to submit 49
00:10:05.910 success 209, unsuccessful 180, failed 0
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@99 -- # sync
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20}
00:10:05.910 10:27:45 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:10:05.910 rmmod nvme_tcp
00:10:05.910 rmmod nvme_fabrics
00:10:05.910 rmmod nvme_keyring
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 3116218 ']'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3116218 ']'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3116218'
00:10:05.910 killing process with pid 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3116218
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:10:05.910 10:27:46 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:10:07.844 00:10:07.844 real 0m31.412s 00:10:07.844 user 0m41.724s 00:10:07.844 sys 0m11.230s 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:07.844 ************************************ 00:10:07.844 END TEST nvmf_zcopy 00:10:07.844 ************************************ 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT 
SIGTERM EXIT 00:10:07.844 00:10:07.844 real 4m30.123s 00:10:07.844 user 10m32.070s 00:10:07.844 sys 1m36.202s 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:07.844 ************************************ 00:10:07.844 END TEST nvmf_target_core 00:10:07.844 ************************************ 00:10:07.844 10:27:48 nvmf_tcp -- nvmf/nvmf.sh@11 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.844 10:27:48 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:07.844 10:27:48 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:07.844 10:27:48 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:10:07.844 ************************************ 00:10:07.844 START TEST nvmf_target_extra 00:10:07.844 ************************************ 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:10:07.844 * Looking for test storage... 
00:10:07.844 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lcov --version 00:10:07.844 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@341 -- # ver2_l=1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@344 -- # case "$op" in 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@345 -- # : 1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # decimal 1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # decimal 2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@353 -- # local d=2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@355 -- # echo 2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@368 -- # return 0 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.104 --rc genhtml_branch_coverage=1 00:10:08.104 --rc genhtml_function_coverage=1 00:10:08.104 --rc genhtml_legend=1 00:10:08.104 --rc geninfo_all_blocks=1 00:10:08.104 --rc geninfo_unexecuted_blocks=1 00:10:08.104 00:10:08.104 ' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.104 --rc 
genhtml_branch_coverage=1 00:10:08.104 --rc genhtml_function_coverage=1 00:10:08.104 --rc genhtml_legend=1 00:10:08.104 --rc geninfo_all_blocks=1 00:10:08.104 --rc geninfo_unexecuted_blocks=1 00:10:08.104 00:10:08.104 ' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.104 --rc genhtml_branch_coverage=1 00:10:08.104 --rc genhtml_function_coverage=1 00:10:08.104 --rc genhtml_legend=1 00:10:08.104 --rc geninfo_all_blocks=1 00:10:08.104 --rc geninfo_unexecuted_blocks=1 00:10:08.104 00:10:08.104 ' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.104 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.104 --rc genhtml_branch_coverage=1 00:10:08.104 --rc genhtml_function_coverage=1 00:10:08.104 --rc genhtml_legend=1 00:10:08.104 --rc geninfo_all_blocks=1 00:10:08.104 --rc geninfo_unexecuted_blocks=1 00:10:08.104 00:10:08.104 ' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.104 10:27:48 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@15 -- # shopt -s extglob 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 
00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@50 -- # : 0 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:08.104 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:08.104 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 
00:10:08.105 ************************************ 00:10:08.105 START TEST nvmf_example 00:10:08.105 ************************************ 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:10:08.105 * Looking for test storage... 00:10:08.105 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lcov --version 00:10:08.105 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # IFS=.-: 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@336 -- # read -ra ver1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # IFS=.-: 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@337 -- # read -ra ver2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@338 -- # local 'op=<' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@340 -- # ver1_l=2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@341 -- # ver2_l=1 
00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@344 -- # case "$op" in 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@345 -- # : 1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # decimal 1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@365 -- # ver1[v]=1 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # decimal 2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@353 -- # local d=2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@355 -- # echo 2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@366 -- # ver2[v]=2 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@368 -- # return 0 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:08.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.365 --rc genhtml_branch_coverage=1 00:10:08.365 --rc genhtml_function_coverage=1 00:10:08.365 --rc genhtml_legend=1 00:10:08.365 --rc geninfo_all_blocks=1 00:10:08.365 --rc geninfo_unexecuted_blocks=1 00:10:08.365 00:10:08.365 ' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:08.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.365 --rc genhtml_branch_coverage=1 00:10:08.365 --rc genhtml_function_coverage=1 00:10:08.365 --rc genhtml_legend=1 00:10:08.365 --rc geninfo_all_blocks=1 00:10:08.365 --rc geninfo_unexecuted_blocks=1 00:10:08.365 00:10:08.365 ' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:08.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.365 --rc genhtml_branch_coverage=1 00:10:08.365 --rc genhtml_function_coverage=1 00:10:08.365 --rc genhtml_legend=1 00:10:08.365 --rc geninfo_all_blocks=1 00:10:08.365 --rc geninfo_unexecuted_blocks=1 00:10:08.365 00:10:08.365 ' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:08.365 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:08.365 --rc genhtml_branch_coverage=1 00:10:08.365 --rc genhtml_function_coverage=1 00:10:08.365 --rc genhtml_legend=1 00:10:08.365 --rc geninfo_all_blocks=1 00:10:08.365 --rc geninfo_unexecuted_blocks=1 00:10:08.365 00:10:08.365 ' 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:08.365 
10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:10:08.365 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
scripts/common.sh@15 -- # shopt -s extglob 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@50 -- # : 0 
00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:08.366 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:10:08.366 
10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # remove_target_ns 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # xtrace_disable 00:10:08.366 10:27:48 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:14.937 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # pci_devs=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # net_devs=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # e810=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@136 -- # local -ga e810 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # x722=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@137 -- # local -ga x722 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # mlx=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@138 -- # local -ga mlx 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:14.937 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:14.937 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:14.937 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:14.937 Found net devices under 0000:86:00.0: cvl_0_0 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:14.937 Found net devices under 0000:86:00.1: cvl_0_1 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- 
# net_devs+=("${pci_net_devs[@]}") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@262 -- # is_hw=yes 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@257 -- # create_target_ns 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo 
up' 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@28 -- # local -g _dev 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # ips=() 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:14.937 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@53 -- 
# [[ tcp == rdma ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772161 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip addr 
add 10.0.0.1/24 dev cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:14.938 10.0.0.1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@11 -- # local val=167772162 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:14.938 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:14.938 10.0.0.2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example 
-- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:14.938 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:14.938 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.447 ms 00:10:14.938 00:10:14.938 --- 10.0.0.1 ping statistics --- 00:10:14.938 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.938 rtt min/avg/max/mdev = 0.447/0.447/0.447/0.000 ms 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:14.938 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:14.938 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:14.938 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:14.939 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:10:14.939 00:10:14.939 --- 10.0.0.2 ping statistics --- 00:10:14.939 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:14.939 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # return 0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.939 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:14.939 10:27:54 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@107 -- # local dev=target1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@109 -- # return 1 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@168 -- # dev= 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@169 -- # return 0 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:14.939 10:27:54 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:10:14.939 10:27:55 
nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=3123756 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 3123756 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@835 -- # '[' -z 3123756 ']' 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:14.939 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:14.939 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@868 -- # return 0 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:55 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:10:15.507 10:27:56 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 
subnqn:nqn.2016-06.io.spdk:cnode1' 00:10:27.711 Initializing NVMe Controllers 00:10:27.711 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:27.711 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:27.711 Initialization complete. Launching workers. 00:10:27.711 ======================================================== 00:10:27.711 Latency(us) 00:10:27.711 Device Information : IOPS MiB/s Average min max 00:10:27.711 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 18232.26 71.22 3509.72 526.30 15439.90 00:10:27.711 ======================================================== 00:10:27.711 Total : 18232.26 71.22 3509.72 526.30 15439.90 00:10:27.711 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # nvmfcleanup 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@99 -- # sync 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@102 -- # set +e 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@103 -- # for i in {1..20} 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:10:27.711 rmmod nvme_tcp 00:10:27.711 rmmod nvme_fabrics 00:10:27.711 rmmod nvme_keyring 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@106 -- # set -e 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@107 -- 
# return 0 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@336 -- # '[' -n 3123756 ']' 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@337 -- # killprocess 3123756 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # '[' -z 3123756 ']' 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # kill -0 3123756 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # uname 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3123756 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@960 -- # process_name=nvmf 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@964 -- # '[' nvmf = sudo ']' 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3123756' 00:10:27.711 killing process with pid 3123756 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@973 -- # kill 3123756 00:10:27.711 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@978 -- # wait 3123756 00:10:27.711 nvmf threads initialize successfully 00:10:27.711 bdev subsystem init successfully 00:10:27.711 created a nvmf target service 00:10:27.711 create targets's poll groups done 00:10:27.711 all subsystems of target started 00:10:27.711 nvmf target is running 00:10:27.711 all subsystems of target stopped 00:10:27.712 destroy targets's poll groups done 00:10:27.712 destroyed the nvmf target service 00:10:27.712 bdev subsystem finish successfully 00:10:27.712 nvmf threads destroy successfully 00:10:27.712 10:28:06 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # nvmf_fini 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@264 -- # local dev 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@267 -- # remove_target_ns 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:27.712 10:28:06 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@268 -- # delete_main_bridge 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@130 -- # return 0 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_0 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # _dev=0 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@41 -- # dev_map=() 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/setup.sh@284 -- # iptr 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-save 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@542 -- # iptables-restore 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.970 00:10:27.970 real 
0m19.956s 00:10:27.970 user 0m46.107s 00:10:27.970 sys 0m6.134s 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:27.970 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:10:27.970 ************************************ 00:10:27.970 END TEST nvmf_example 00:10:27.971 ************************************ 00:10:27.971 10:28:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:27.971 10:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:27.971 10:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:27.971 10:28:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:10:28.232 ************************************ 00:10:28.232 START TEST nvmf_filesystem 00:10:28.232 ************************************ 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:10:28.232 * Looking for test storage... 
00:10:28.232 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.232 
10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.232 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:10:28.232 --rc genhtml_branch_coverage=1 00:10:28.232 --rc genhtml_function_coverage=1 00:10:28.232 --rc genhtml_legend=1 00:10:28.232 --rc geninfo_all_blocks=1 00:10:28.232 --rc geninfo_unexecuted_blocks=1 00:10:28.232 00:10:28.232 ' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.232 --rc genhtml_branch_coverage=1 00:10:28.232 --rc genhtml_function_coverage=1 00:10:28.232 --rc genhtml_legend=1 00:10:28.232 --rc geninfo_all_blocks=1 00:10:28.232 --rc geninfo_unexecuted_blocks=1 00:10:28.232 00:10:28.232 ' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.232 --rc genhtml_branch_coverage=1 00:10:28.232 --rc genhtml_function_coverage=1 00:10:28.232 --rc genhtml_legend=1 00:10:28.232 --rc geninfo_all_blocks=1 00:10:28.232 --rc geninfo_unexecuted_blocks=1 00:10:28.232 00:10:28.232 ' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.232 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.232 --rc genhtml_branch_coverage=1 00:10:28.232 --rc genhtml_function_coverage=1 00:10:28.232 --rc genhtml_legend=1 00:10:28.232 --rc geninfo_all_blocks=1 00:10:28.232 --rc geninfo_unexecuted_blocks=1 00:10:28.232 00:10:28.232 ' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:10:28.232 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:10:28.232 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_CET=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:10:28.232 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:10:28.232 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/build_config.sh@42 -- # CONFIG_VHOST=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_XNVME=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:10:28.233 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # 
CONFIG_DPDK_PKG_CONFIG=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_FC=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@90 -- # CONFIG_URING=n 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.233 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:10:28.233 
10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:10:28.233 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:10:28.233 #define SPDK_CONFIG_H 00:10:28.233 #define SPDK_CONFIG_AIO_FSDEV 1 00:10:28.233 #define SPDK_CONFIG_APPS 1 00:10:28.233 #define SPDK_CONFIG_ARCH native 00:10:28.233 #undef SPDK_CONFIG_ASAN 00:10:28.233 #undef SPDK_CONFIG_AVAHI 00:10:28.233 #undef SPDK_CONFIG_CET 00:10:28.233 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:10:28.233 #define SPDK_CONFIG_COVERAGE 1 00:10:28.233 #define SPDK_CONFIG_CROSS_PREFIX 00:10:28.233 #undef SPDK_CONFIG_CRYPTO 00:10:28.233 #undef SPDK_CONFIG_CRYPTO_MLX5 00:10:28.233 #undef SPDK_CONFIG_CUSTOMOCF 00:10:28.233 #undef SPDK_CONFIG_DAOS 00:10:28.233 #define SPDK_CONFIG_DAOS_DIR 00:10:28.233 #define SPDK_CONFIG_DEBUG 1 00:10:28.233 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:10:28.233 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:10:28.233 #define SPDK_CONFIG_DPDK_INC_DIR 00:10:28.233 #define SPDK_CONFIG_DPDK_LIB_DIR 00:10:28.233 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:10:28.233 #undef SPDK_CONFIG_DPDK_UADK 00:10:28.233 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:10:28.233 #define SPDK_CONFIG_EXAMPLES 1 00:10:28.233 #undef SPDK_CONFIG_FC 00:10:28.233 #define SPDK_CONFIG_FC_PATH 00:10:28.233 #define SPDK_CONFIG_FIO_PLUGIN 1 00:10:28.233 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:10:28.233 #define SPDK_CONFIG_FSDEV 1 00:10:28.233 #undef SPDK_CONFIG_FUSE 00:10:28.233 #undef SPDK_CONFIG_FUZZER 00:10:28.233 #define SPDK_CONFIG_FUZZER_LIB 00:10:28.233 #undef SPDK_CONFIG_GOLANG 00:10:28.233 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:10:28.233 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:10:28.233 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:10:28.233 #define 
SPDK_CONFIG_HAVE_KEYUTILS 1 00:10:28.233 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:10:28.233 #undef SPDK_CONFIG_HAVE_LIBBSD 00:10:28.233 #undef SPDK_CONFIG_HAVE_LZ4 00:10:28.233 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:10:28.233 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:10:28.233 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:10:28.233 #define SPDK_CONFIG_IDXD 1 00:10:28.233 #define SPDK_CONFIG_IDXD_KERNEL 1 00:10:28.233 #undef SPDK_CONFIG_IPSEC_MB 00:10:28.233 #define SPDK_CONFIG_IPSEC_MB_DIR 00:10:28.233 #define SPDK_CONFIG_ISAL 1 00:10:28.233 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:10:28.233 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:10:28.233 #define SPDK_CONFIG_LIBDIR 00:10:28.233 #undef SPDK_CONFIG_LTO 00:10:28.233 #define SPDK_CONFIG_MAX_LCORES 128 00:10:28.233 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:10:28.233 #define SPDK_CONFIG_NVME_CUSE 1 00:10:28.234 #undef SPDK_CONFIG_OCF 00:10:28.234 #define SPDK_CONFIG_OCF_PATH 00:10:28.234 #define SPDK_CONFIG_OPENSSL_PATH 00:10:28.234 #undef SPDK_CONFIG_PGO_CAPTURE 00:10:28.234 #define SPDK_CONFIG_PGO_DIR 00:10:28.234 #undef SPDK_CONFIG_PGO_USE 00:10:28.234 #define SPDK_CONFIG_PREFIX /usr/local 00:10:28.234 #undef SPDK_CONFIG_RAID5F 00:10:28.234 #undef SPDK_CONFIG_RBD 00:10:28.234 #define SPDK_CONFIG_RDMA 1 00:10:28.234 #define SPDK_CONFIG_RDMA_PROV verbs 00:10:28.234 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:10:28.234 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:10:28.234 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:10:28.234 #define SPDK_CONFIG_SHARED 1 00:10:28.234 #undef SPDK_CONFIG_SMA 00:10:28.234 #define SPDK_CONFIG_TESTS 1 00:10:28.234 #undef SPDK_CONFIG_TSAN 00:10:28.234 #define SPDK_CONFIG_UBLK 1 00:10:28.234 #define SPDK_CONFIG_UBSAN 1 00:10:28.234 #undef SPDK_CONFIG_UNIT_TESTS 00:10:28.234 #undef SPDK_CONFIG_URING 00:10:28.234 #define SPDK_CONFIG_URING_PATH 00:10:28.234 #undef SPDK_CONFIG_URING_ZNS 00:10:28.234 #undef SPDK_CONFIG_USDT 00:10:28.234 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:10:28.234 
#undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:10:28.234 #define SPDK_CONFIG_VFIO_USER 1 00:10:28.234 #define SPDK_CONFIG_VFIO_USER_DIR 00:10:28.234 #define SPDK_CONFIG_VHOST 1 00:10:28.234 #define SPDK_CONFIG_VIRTIO 1 00:10:28.234 #undef SPDK_CONFIG_VTUNE 00:10:28.234 #define SPDK_CONFIG_VTUNE_DIR 00:10:28.234 #define SPDK_CONFIG_WERROR 1 00:10:28.234 #define SPDK_CONFIG_WPDK_DIR 00:10:28.234 #undef SPDK_CONFIG_XNVME 00:10:28.234 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # 
PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:10:28.234 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! 
-e /.dockerenv ]] 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:10:28.496 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:10:28.497 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:10:28.497 
10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:10:28.497 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:10:28.497 
10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 1 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : 00:10:28.497 10:28:08 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : true 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 
00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:10:28.497 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@166 -- # : 0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@173 -- # : 0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # : 0 
00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # : 0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # export 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@184 -- # 
LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@191 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- 
# export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@206 -- # cat 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@255 -- # 
SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:10:28.498 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # _LCOV= 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@275 -- # lcov_opt= 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # export valgrind= 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # valgrind= 00:10:28.499 10:28:08 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # uname -s 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@289 -- # MAKE=make 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@290 -- # MAKEFLAGS=-j96 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@309 -- # TEST_MODE= 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@310 -- # for i in "$@" 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@311 -- # case "$i" in 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@316 -- # TEST_TRANSPORT=tcp 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # [[ -z 3126160 ]] 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # kill -0 3126160 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1678 -- # set_test_storage 2147483648 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@344 -- # local mount target_dir 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.R2IDz1 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.R2IDz1/tests/target /tmp/spdk.R2IDz1 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # df -T 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_devtmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=67108864 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # 
sizes["$mount"]=67108864 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/pmem0 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=ext2 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=4096 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=5284429824 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=5284425728 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=spdk_root 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=overlay 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=189162377216 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=195963973632 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=6801596416 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 
00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97973792768 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981984768 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=8192000 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=39169753088 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=39192797184 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=23044096 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=97981329408 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=97981988864 00:10:28.499 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=659456 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # avails["$mount"]=19596382208 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # sizes["$mount"]=19596394496 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:10:28.499 * Looking for test storage... 
00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # local target_space new_size 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.499 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@385 -- # mount=/ 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # target_space=189162377216 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == tmpfs ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ overlay == ramfs ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@393 -- # [[ / == / ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@394 -- # new_size=9016188928 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@395 -- # (( new_size * 100 / sizes[/] > 95 )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.500 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.500 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@402 -- # return 0 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set -o errtrace 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1681 -- # shopt -s extdebug 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1685 -- # true 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # xtrace_fd 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:10:28.500 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lcov --version 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # IFS=.-: 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@336 -- # read -ra ver1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # IFS=.-: 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@337 -- # read -ra ver2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@338 -- # local 'op=<' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@340 -- # ver1_l=2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@341 -- # ver2_l=1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@344 -- # case "$op" in 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@345 -- # : 1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # decimal 1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@365 -- # ver1[v]=1 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # decimal 2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@353 -- # local d=2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@355 -- # echo 2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@366 -- # ver2[v]=2 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@368 -- # return 0 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:10:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.500 --rc genhtml_branch_coverage=1 00:10:28.500 --rc genhtml_function_coverage=1 00:10:28.500 --rc genhtml_legend=1 00:10:28.500 --rc geninfo_all_blocks=1 00:10:28.500 --rc geninfo_unexecuted_blocks=1 00:10:28.500 00:10:28.500 ' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:10:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.500 --rc genhtml_branch_coverage=1 00:10:28.500 --rc genhtml_function_coverage=1 00:10:28.500 --rc genhtml_legend=1 00:10:28.500 --rc geninfo_all_blocks=1 00:10:28.500 --rc geninfo_unexecuted_blocks=1 00:10:28.500 00:10:28.500 ' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:10:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.500 --rc genhtml_branch_coverage=1 00:10:28.500 --rc genhtml_function_coverage=1 00:10:28.500 --rc genhtml_legend=1 00:10:28.500 --rc geninfo_all_blocks=1 00:10:28.500 --rc geninfo_unexecuted_blocks=1 00:10:28.500 00:10:28.500 ' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:10:28.500 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:10:28.500 --rc genhtml_branch_coverage=1 00:10:28.500 --rc genhtml_function_coverage=1 00:10:28.500 --rc genhtml_legend=1 00:10:28.500 --rc geninfo_all_blocks=1 00:10:28.500 --rc geninfo_unexecuted_blocks=1 00:10:28.500 00:10:28.500 ' 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:28.500 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:28.500 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:28.501 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@15 -- # shopt -s extglob 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.501 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@7 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@50 -- # : 0 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:10:28.501 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:28.501 10:28:09 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # remove_target_ns 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # xtrace_disable 00:10:28.501 10:28:09 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # pci_devs=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:10:35.070 10:28:14 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # net_devs=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # e810=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@136 -- # local -ga e810 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # x722=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@137 -- # local -ga x722 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # mlx=() 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@138 -- # local -ga mlx 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:35.070 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == 
rdma ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:35.070 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:35.070 Found net devices under 0000:86:00.0: cvl_0_0 00:10:35.070 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:35.071 Found net devices under 0000:86:00.1: cvl_0_1 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@262 -- # is_hw=yes 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 
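Editor's note: the device scan above (`Found 0000:86:00.0 (0x8086 - 0x159b)` and the matching `/sys/bus/pci/devices/$pci/net/` lookups) boils down to walking sysfs and matching vendor/device ID pairs; 0x8086:0x159b is the Intel E810 pair the log reports. A standalone sketch of that matching, with the sysfs root as a parameter so it can be exercised against a mock tree rather than real hardware — this is an illustration, not the actual `gather_supported_nvmf_pci_devs` implementation:

```shell
#!/usr/bin/env bash
# Print PCI addresses under $root whose vendor/device files match the
# given IDs, mimicking the pci_bus_cache lookups in nvmf/common.sh.
find_pci_by_id() {
    local root=$1 vendor=$2 device=$3 dev
    for dev in "$root"/*; do
        [ -e "$dev/vendor" ] || continue
        if [ "$(cat "$dev/vendor")" = "$vendor" ] &&
           [ "$(cat "$dev/device")" = "$device" ]; then
            basename "$dev"
        fi
    done
}

# Build a mock sysfs tree with the two E810 ports seen in the log.
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0" "$root/0000:86:00.1"
for d in "$root"/0000:86:00.*; do
    echo 0x8086 > "$d/vendor"
    echo 0x159b > "$d/device"
done
find_pci_by_id "$root" 0x8086 0x159b
```

On a real host you would point `root` at /sys/bus/pci/devices instead of a mock directory.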
00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@257 -- # create_target_ns 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@28 -- # local -g _dev 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # ips=() 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 
00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:10:35.071 10:28:14 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772161 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 
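Editor's note: the `set_ip cvl_0_0 167772161` / `val_to_ip` steps above convert the integer ip_pool counter into dotted-quad form (167772161 is 0x0a000001, i.e. 10.0.0.1). The log only shows the final `printf '%u.%u.%u.%u\n' 10 0 0 1`; the byte extraction below is a reconstruction of what such a helper plausibly does, not a copy of setup.sh:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad IPv4 notation by shifting
# out each byte, as the nvmf/setup.sh ip_pool bookkeeping requires.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) \
        $(( (val >> 16) & 0xff )) \
        $(( (val >> 8)  & 0xff )) \
        $((  val        & 0xff ))
}

val_to_ip 167772161   # the initiator address assigned in the log
val_to_ip 167772162   # the target address assigned in the log
```

This is also why `ips=("$ip" $((++ip)))` hands out consecutive addresses: the pool is plain integer arithmetic until the last moment.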
00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:10:35.071 10.0.0.1 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@11 -- # local val=167772162 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:10:35.071 10.0.0.2 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 
00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:10:35.071 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:10:35.072 10:28:15 
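Editor's note: stripped of the xtrace prefixes, the interface setup traced above reduces to a short privileged sequence. The commands below are the ones the log actually executed (they require root and real `cvl_0_*` devices, so this recap is for reading, not re-running):

```shell
#!/usr/bin/env bash
# Recap of the nvmf/setup.sh wiring recorded in the log: one
# initiator/target pair, with the target NIC moved into a namespace.
ip netns add nvmf_ns_spdk                                  # create_target_ns
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk                     # add_to_ns
ip addr add 10.0.0.1/24 dev cvl_0_0                        # set_ip (initiator)
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias
ip link set cvl_0_0 up                                     # set_up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT \
    -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT'
```

The ifalias writes are what later lets `get_ip_address` recover an interface's address with a plain `cat` instead of parsing `ip addr` output.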
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo 
cvl_0_0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:10:35.072 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:10:35.072 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.483 ms 00:10:35.072 00:10:35.072 --- 10.0.0.1 ping statistics --- 00:10:35.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.072 rtt min/avg/max/mdev = 0.483/0.483/0.483/0.000 ms 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:10:35.072 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:35.072 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.197 ms 00:10:35.072 00:10:35.072 --- 10.0.0.2 ping statistics --- 00:10:35.072 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:35.072 rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # return 0 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@331 -- # 
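Editor's note: the two ping transcripts above each end with an iputils rtt summary line. When triaging runs like this one, the average is easy to pull out mechanically — a small sketch (not part of the test scripts) that splits the summary on `/`:

```shell
#!/usr/bin/env bash
# Extract the average rtt from an iputils ping statistics block read
# on stdin; in "rtt min/avg/max/mdev = A/B/C/D ms", field 5 of a
# '/'-separated split is the average (B).
avg_rtt() {
    awk -F'/' '/^rtt min\/avg\/max/ { print $5 }'
}

avg_rtt <<'EOF'
--- 10.0.0.2 ping statistics ---
1 packets transmitted, 1 received, 0% packet loss, time 0ms
rtt min/avg/max/mdev = 0.197/0.197/0.197/0.000 ms
EOF
```

Run over this log it would report 0.483 ms for the namespace-internal 10.0.0.1 ping and 0.197 ms for the cross-namespace 10.0.0.2 ping.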
NVMF_TARGET_INTERFACE=cvl_0_1 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:10:35.072 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:10:35.073 10:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@107 -- # local dev=target1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@109 -- # return 1 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@168 -- # dev= 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@169 -- # return 0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 
-- # modprobe nvme-tcp 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:35.073 ************************************ 00:10:35.073 START TEST nvmf_filesystem_no_in_capsule 00:10:35.073 ************************************ 00:10:35.073 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 0 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=3129283 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 3129283 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3129283 ']' 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:35.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.074 [2024-11-20 10:28:15.393751] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:35.074 [2024-11-20 10:28:15.393801] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:35.074 [2024-11-20 10:28:15.473550] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:35.074 [2024-11-20 10:28:15.517492] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:35.074 [2024-11-20 10:28:15.517531] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:10:35.074 [2024-11-20 10:28:15.517538] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:35.074 [2024-11-20 10:28:15.517544] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:35.074 [2024-11-20 10:28:15.517550] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:10:35.074 [2024-11-20 10:28:15.519143] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:35.074 [2024-11-20 10:28:15.519265] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:35.074 [2024-11-20 10:28:15.519297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:35.074 [2024-11-20 10:28:15.519297] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.074 [2024-11-20 10:28:15.656390] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.074 Malloc1 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.074 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.333 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 [2024-11-20 10:28:15.807548] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:35.334 10:28:15 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:35.334 { 00:10:35.334 "name": "Malloc1", 00:10:35.334 "aliases": [ 00:10:35.334 "61ebca71-4494-4ee3-a2fe-2bbe233c20e0" 00:10:35.334 ], 00:10:35.334 "product_name": "Malloc disk", 00:10:35.334 "block_size": 512, 00:10:35.334 "num_blocks": 1048576, 00:10:35.334 "uuid": "61ebca71-4494-4ee3-a2fe-2bbe233c20e0", 00:10:35.334 "assigned_rate_limits": { 00:10:35.334 "rw_ios_per_sec": 0, 00:10:35.334 "rw_mbytes_per_sec": 0, 00:10:35.334 "r_mbytes_per_sec": 0, 00:10:35.334 "w_mbytes_per_sec": 0 00:10:35.334 }, 00:10:35.334 "claimed": true, 00:10:35.334 "claim_type": "exclusive_write", 00:10:35.334 "zoned": false, 00:10:35.334 "supported_io_types": { 00:10:35.334 "read": true, 00:10:35.334 "write": true, 00:10:35.334 "unmap": true, 00:10:35.334 "flush": true, 00:10:35.334 "reset": true, 00:10:35.334 "nvme_admin": false, 00:10:35.334 "nvme_io": false, 00:10:35.334 "nvme_io_md": false, 00:10:35.334 "write_zeroes": true, 00:10:35.334 "zcopy": true, 00:10:35.334 "get_zone_info": false, 00:10:35.334 "zone_management": false, 00:10:35.334 "zone_append": false, 00:10:35.334 "compare": false, 00:10:35.334 "compare_and_write": 
false, 00:10:35.334 "abort": true, 00:10:35.334 "seek_hole": false, 00:10:35.334 "seek_data": false, 00:10:35.334 "copy": true, 00:10:35.334 "nvme_iov_md": false 00:10:35.334 }, 00:10:35.334 "memory_domains": [ 00:10:35.334 { 00:10:35.334 "dma_device_id": "system", 00:10:35.334 "dma_device_type": 1 00:10:35.334 }, 00:10:35.334 { 00:10:35.334 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:35.334 "dma_device_type": 2 00:10:35.334 } 00:10:35.334 ], 00:10:35.334 "driver_specific": {} 00:10:35.334 } 00:10:35.334 ]' 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:35.334 10:28:15 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:36.709 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- 
# waitforserial SPDKISFASTANDAWESOME 00:10:36.709 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:36.709 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:36.709 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:10:36.709 10:28:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:38.612 10:28:19 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:38.612 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:38.871 10:28:19 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:39.807 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:39.807 ************************************ 00:10:39.807 START TEST filesystem_ext4 00:10:39.807 ************************************ 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:39.807 10:28:20 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:39.807 10:28:20 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:39.807 mke2fs 1.47.0 (5-Feb-2023) 00:10:39.807 Discarding device blocks: 0/522240 done 00:10:40.065 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:40.065 Filesystem UUID: 8bd27436-dd4c-4c75-9e43-db998ad26895 00:10:40.065 Superblock backups stored on blocks: 00:10:40.065 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:40.065 00:10:40.065 Allocating group tables: 0/64 done 00:10:40.065 Writing inode tables: 0/64 done 00:10:42.592 Creating journal (8192 blocks): done 00:10:44.093 Writing superblocks and filesystem accounting information: 0/64 done 00:10:44.093 00:10:44.093 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@949 -- # return 0 00:10:44.093 10:28:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.657 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 3129283 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.657 00:10:50.657 real 0m10.116s 00:10:50.657 user 0m0.026s 00:10:50.657 sys 0m0.077s 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 ************************************ 00:10:50.657 END TEST filesystem_ext4 00:10:50.657 ************************************ 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:10:50.657 
10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 ************************************ 00:10:50.657 START TEST filesystem_btrfs 00:10:50.657 ************************************ 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@933 -- # local force 00:10:50.657 10:28:30 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:10:50.657 btrfs-progs v6.8.1 00:10:50.657 See https://btrfs.readthedocs.io for more information. 00:10:50.657 00:10:50.657 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:10:50.657 NOTE: several default settings have changed in version 5.15, please make sure 00:10:50.657 this does not affect your deployments: 00:10:50.657 - DUP for metadata (-m dup) 00:10:50.657 - enabled no-holes (-O no-holes) 00:10:50.657 - enabled free-space-tree (-R free-space-tree) 00:10:50.657 00:10:50.657 Label: (null) 00:10:50.657 UUID: e38d39ed-3c2d-4c7e-94b0-33a21f27cbf0 00:10:50.657 Node size: 16384 00:10:50.657 Sector size: 4096 (CPU page size: 4096) 00:10:50.657 Filesystem size: 510.00MiB 00:10:50.657 Block group profiles: 00:10:50.657 Data: single 8.00MiB 00:10:50.657 Metadata: DUP 32.00MiB 00:10:50.657 System: DUP 8.00MiB 00:10:50.657 SSD detected: yes 00:10:50.657 Zoned device: no 00:10:50.657 Features: extref, skinny-metadata, no-holes, free-space-tree 00:10:50.657 Checksum: crc32c 00:10:50.657 Number of devices: 1 00:10:50.657 Devices: 00:10:50.657 ID SIZE PATH 00:10:50.657 1 510.00MiB /dev/nvme0n1p1 00:10:50.657 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@949 -- # return 0 00:10:50.657 10:28:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:50.657 10:28:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 3129283 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:50.657 00:10:50.657 real 0m0.445s 00:10:50.657 user 0m0.030s 00:10:50.657 sys 0m0.104s 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:50.657 
10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 ************************************ 00:10:50.657 END TEST filesystem_btrfs 00:10:50.657 ************************************ 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:50.657 ************************************ 00:10:50.657 START TEST filesystem_xfs 00:10:50.657 ************************************ 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- 
common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # local i=0 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@933 -- # local force 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']' 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@938 -- # force=-f 00:10:50.657 10:28:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1 00:10:50.657 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:10:50.657 = sectsz=512 attr=2, projid32bit=1 00:10:50.657 = crc=1 finobt=1, sparse=1, rmapbt=0 00:10:50.657 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:10:50.657 data = bsize=4096 blocks=130560, imaxpct=25 00:10:50.657 = sunit=0 swidth=0 blks 00:10:50.657 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:10:50.657 log =internal log bsize=4096 blocks=16384, version=2 00:10:50.657 = sectsz=512 sunit=0 blks, lazy-count=1 00:10:50.657 realtime =none extsz=4096 blocks=0, rtextents=0 00:10:51.591 Discarding blocks...Done. 
00:10:51.591 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@949 -- # return 0 00:10:51.591 10:28:32 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:10:53.494 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:10:53.494 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:10:53.494 10:28:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 3129283 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:10:53.494 10:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:10:53.494 00:10:53.494 real 0m2.909s 00:10:53.494 user 0m0.022s 00:10:53.494 sys 0m0.076s 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:10:53.494 ************************************ 00:10:53.494 END TEST filesystem_xfs 00:10:53.494 ************************************ 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:10:53.494 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:53.753 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 3129283 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3129283 ']' 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3129283 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # uname 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- 
common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3129283 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3129283' 00:10:53.753 killing process with pid 3129283 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@973 -- # kill 3129283 00:10:53.753 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@978 -- # wait 3129283 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:10:54.012 00:10:54.012 real 0m19.315s 00:10:54.012 user 1m16.059s 00:10:54.012 sys 0m1.434s 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.012 ************************************ 00:10:54.012 END TEST nvmf_filesystem_no_in_capsule 00:10:54.012 ************************************ 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:54.012 10:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:10:54.012 ************************************ 00:10:54.012 START TEST nvmf_filesystem_in_capsule 00:10:54.012 ************************************ 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1129 -- # nvmf_filesystem_part 4096 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@726 -- # xtrace_disable 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@328 -- # nvmfpid=3132702 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@329 -- # waitforlisten 3132702 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@835 -- # '[' -z 3132702 ']' 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:54.012 10:28:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:54.012 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:54.012 10:28:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.270 [2024-11-20 10:28:34.787087] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:10:54.270 [2024-11-20 10:28:34.787131] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:54.270 [2024-11-20 10:28:34.855906] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:54.270 [2024-11-20 10:28:34.899939] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:54.270 [2024-11-20 10:28:34.899979] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:54.270 [2024-11-20 10:28:34.899987] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:54.270 [2024-11-20 10:28:34.899994] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:54.270 [2024-11-20 10:28:34.899999] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:10:54.270 [2024-11-20 10:28:34.901539] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:10:54.270 [2024-11-20 10:28:34.901581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:10:54.270 [2024-11-20 10:28:34.901706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:10:54.270 [2024-11-20 10:28:34.901706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@868 -- # return 0 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@732 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 [2024-11-20 10:28:35.045632] tcp.c: 
738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 10:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 [2024-11-20 10:28:35.187629] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # local bdev_name=Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # local bdev_info 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # local bs 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1385 -- # local nb 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:54.529 10:28:35 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:54.529 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:10:54.529 { 00:10:54.529 "name": "Malloc1", 00:10:54.529 "aliases": [ 00:10:54.529 "dabf8ffc-7fb9-48ae-ad79-8fb714506e18" 00:10:54.529 ], 00:10:54.529 "product_name": "Malloc disk", 00:10:54.529 "block_size": 512, 00:10:54.529 "num_blocks": 1048576, 00:10:54.529 "uuid": "dabf8ffc-7fb9-48ae-ad79-8fb714506e18", 00:10:54.529 "assigned_rate_limits": { 00:10:54.529 "rw_ios_per_sec": 0, 00:10:54.529 "rw_mbytes_per_sec": 0, 00:10:54.529 "r_mbytes_per_sec": 0, 00:10:54.529 "w_mbytes_per_sec": 0 00:10:54.529 }, 00:10:54.529 "claimed": true, 00:10:54.529 "claim_type": "exclusive_write", 00:10:54.529 "zoned": false, 00:10:54.529 "supported_io_types": { 00:10:54.529 "read": true, 00:10:54.529 "write": true, 00:10:54.529 "unmap": true, 00:10:54.529 "flush": true, 00:10:54.529 "reset": true, 00:10:54.529 "nvme_admin": false, 00:10:54.529 "nvme_io": false, 00:10:54.529 "nvme_io_md": false, 00:10:54.529 "write_zeroes": true, 00:10:54.529 "zcopy": true, 00:10:54.529 "get_zone_info": false, 00:10:54.529 "zone_management": false, 00:10:54.529 "zone_append": false, 00:10:54.529 "compare": false, 00:10:54.529 "compare_and_write": false, 00:10:54.529 "abort": true, 00:10:54.529 "seek_hole": false, 00:10:54.529 "seek_data": false, 00:10:54.529 "copy": true, 00:10:54.529 "nvme_iov_md": false 00:10:54.529 }, 00:10:54.529 "memory_domains": [ 00:10:54.529 { 00:10:54.529 "dma_device_id": "system", 00:10:54.530 "dma_device_type": 1 00:10:54.530 }, 00:10:54.530 { 00:10:54.530 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:10:54.530 "dma_device_type": 2 00:10:54.530 } 00:10:54.530 ], 00:10:54.530 
"driver_specific": {} 00:10:54.530 } 00:10:54.530 ]' 00:10:54.530 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bs=512 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # nb=1048576 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1391 -- # bdev_size=512 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1392 -- # echo 512 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:10:54.788 10:28:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:55.723 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:10:55.723 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1202 -- # local i=0 00:10:55.724 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:10:55.724 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1204 -- # [[ -n 
'' ]] 00:10:55.724 10:28:36 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1209 -- # sleep 2 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1212 -- # return 0 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:10:57.728 10:28:38 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:10:57.728 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:10:57.986 10:28:38 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:10:58.552 10:28:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:10:59.929 ************************************ 00:10:59.929 START TEST filesystem_in_capsule_ext4 00:10:59.929 ************************************ 00:10:59.929 10:28:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create ext4 nvme0n1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # local fstype=ext4 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@932 -- # local i=0 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@933 -- # local force 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # '[' ext4 = ext4 ']' 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@936 -- # force=-F 00:10:59.929 10:28:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@941 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:10:59.929 mke2fs 1.47.0 (5-Feb-2023) 00:10:59.929 Discarding device blocks: 
0/522240 done 00:10:59.929 Creating filesystem with 522240 1k blocks and 130560 inodes 00:10:59.929 Filesystem UUID: ae81a319-78dc-40de-9056-173ef887d650 00:10:59.929 Superblock backups stored on blocks: 00:10:59.929 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:10:59.929 00:10:59.929 Allocating group tables: 0/64 done 00:10:59.929 Writing inode tables: 0/64 done 00:11:00.864 Creating journal (8192 blocks): done 00:11:00.864 Writing superblocks and filesystem accounting information: 0/64 done 00:11:00.864 00:11:00.864 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@949 -- # return 0 00:11:00.864 10:28:41 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:06.130 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:06.130 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:06.130 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:06.130 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:06.388 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:06.388 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:06.388 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- 
target/filesystem.sh@37 -- # kill -0 3132702 00:11:06.388 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:06.389 00:11:06.389 real 0m6.649s 00:11:06.389 user 0m0.033s 00:11:06.389 sys 0m0.064s 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:06.389 ************************************ 00:11:06.389 END TEST filesystem_in_capsule_ext4 00:11:06.389 ************************************ 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:06.389 ************************************ 00:11:06.389 START 
TEST filesystem_in_capsule_btrfs 00:11:06.389 ************************************ 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@930 -- # local fstype=btrfs 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # local i=0 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@933 -- # local force 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # '[' btrfs = ext4 ']' 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@938 -- # force=-f 00:11:06.389 10:28:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@941 -- # mkfs.btrfs -f /dev/nvme0n1p1
00:11:06.648 btrfs-progs v6.8.1
00:11:06.648 See https://btrfs.readthedocs.io for more information.
00:11:06.648
00:11:06.648 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ...
00:11:06.648 NOTE: several default settings have changed in version 5.15, please make sure
00:11:06.648 this does not affect your deployments:
00:11:06.648 - DUP for metadata (-m dup)
00:11:06.648 - enabled no-holes (-O no-holes)
00:11:06.648 - enabled free-space-tree (-R free-space-tree)
00:11:06.648
00:11:06.648 Label: (null)
00:11:06.648 UUID: e862fd78-bfdc-4b91-b82d-eb526b506647
00:11:06.648 Node size: 16384
00:11:06.648 Sector size: 4096 (CPU page size: 4096)
00:11:06.648 Filesystem size: 510.00MiB
00:11:06.648 Block group profiles:
00:11:06.648 Data: single 8.00MiB
00:11:06.648 Metadata: DUP 32.00MiB
00:11:06.648 System: DUP 8.00MiB
00:11:06.648 SSD detected: yes
00:11:06.648 Zoned device: no
00:11:06.648 Features: extref, skinny-metadata, no-holes, free-space-tree
00:11:06.648 Checksum: crc32c
00:11:06.648 Number of devices: 1
00:11:06.648 Devices:
00:11:06.648 ID SIZE PATH
00:11:06.648 1 510.00MiB /dev/nvme0n1p1
00:11:06.648
00:11:06.648 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@949 -- # return 0
00:11:06.648 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device
00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa
00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync
00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs
-- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 3132702 00:11:07.214 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:07.215 00:11:07.215 real 0m0.744s 00:11:07.215 user 0m0.026s 00:11:07.215 sys 0m0.115s 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:07.215 ************************************ 00:11:07.215 END TEST filesystem_in_capsule_btrfs 00:11:07.215 ************************************ 00:11:07.215 10:28:47 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:07.215 ************************************ 00:11:07.215 START TEST filesystem_in_capsule_xfs 00:11:07.215 ************************************ 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1129 -- # nvmf_filesystem_create xfs nvme0n1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@930 -- # local fstype=xfs 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@931 -- # local dev_name=/dev/nvme0n1p1 00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # local i=0 00:11:07.215 
10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@933 -- # local force
00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # '[' xfs = ext4 ']'
00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@938 -- # force=-f
00:11:07.215 10:28:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@941 -- # mkfs.xfs -f /dev/nvme0n1p1
00:11:07.781 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks
00:11:07.781 = sectsz=512 attr=2, projid32bit=1
00:11:07.781 = crc=1 finobt=1, sparse=1, rmapbt=0
00:11:07.781 = reflink=1 bigtime=1 inobtcount=1 nrext64=0
00:11:07.781 data = bsize=4096 blocks=130560, imaxpct=25
00:11:07.781 = sunit=0 swidth=0 blks
00:11:07.781 naming =version 2 bsize=4096 ascii-ci=0, ftype=1
00:11:07.781 log =internal log bsize=4096 blocks=16384, version=2
00:11:07.781 = sectsz=512 sunit=0 blks, lazy-count=1
00:11:07.781 realtime =none extsz=4096 blocks=0, rtextents=0
00:11:08.715 Discarding blocks...Done.
00:11:08.715 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@949 -- # return 0 00:11:08.715 10:28:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:10.616 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:10.616 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:10.616 10:28:50 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 3132702 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 
00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:10.616 00:11:10.616 real 0m3.251s 00:11:10.616 user 0m0.023s 00:11:10.616 sys 0m0.076s 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:10.616 ************************************ 00:11:10.616 END TEST filesystem_in_capsule_xfs 00:11:10.616 ************************************ 00:11:10.616 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:10.875 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1223 -- # local i=0 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.875 10:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1235 -- # return 0 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 3132702 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # '[' -z 3132702 ']' 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # kill -0 3132702 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # uname 00:11:10.875 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:10.875 10:28:51 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3132702 00:11:11.134 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:11.134 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:11.134 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3132702' 00:11:11.134 killing process with pid 3132702 00:11:11.134 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@973 -- # kill 3132702 00:11:11.134 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@978 -- # wait 3132702 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:11.393 00:11:11.393 real 0m17.191s 00:11:11.393 user 1m7.668s 00:11:11.393 sys 0m1.406s 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:11.393 ************************************ 00:11:11.393 END TEST nvmf_filesystem_in_capsule 00:11:11.393 ************************************ 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@99 -- # sync 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@102 -- # set +e 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:11.393 10:28:51 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:11.393 rmmod nvme_tcp 00:11:11.393 rmmod nvme_fabrics 00:11:11.393 rmmod nvme_keyring 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@106 -- # set -e 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@107 -- # return 0 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # nvmf_fini 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@264 -- # local dev 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:11.393 10:28:52 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:13.927 10:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@130 -- # return 0 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # _dev=0 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@41 -- # dev_map=() 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/setup.sh@284 -- # iptr 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-save 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@542 -- # iptables-restore 00:11:13.927 00:11:13.927 real 0m45.381s 00:11:13.927 user 2m25.835s 00:11:13.927 sys 0m7.640s 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:13.927 ************************************ 00:11:13.927 END TEST nvmf_filesystem 00:11:13.927 ************************************ 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:13.927 ************************************ 00:11:13.927 START TEST nvmf_target_discovery 00:11:13.927 ************************************ 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 
00:11:13.927 * Looking for test storage... 00:11:13.927 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
scripts/common.sh@344 -- # case "$op" in 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@345 -- # : 1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # decimal 1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # decimal 2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@353 -- # local d=2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@355 -- # echo 2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@368 -- # return 0 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.927 --rc genhtml_branch_coverage=1 00:11:13.927 --rc genhtml_function_coverage=1 00:11:13.927 --rc genhtml_legend=1 00:11:13.927 --rc geninfo_all_blocks=1 00:11:13.927 --rc geninfo_unexecuted_blocks=1 00:11:13.927 00:11:13.927 ' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.927 --rc genhtml_branch_coverage=1 00:11:13.927 --rc genhtml_function_coverage=1 00:11:13.927 --rc genhtml_legend=1 00:11:13.927 --rc geninfo_all_blocks=1 00:11:13.927 --rc geninfo_unexecuted_blocks=1 00:11:13.927 00:11:13.927 ' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.927 --rc genhtml_branch_coverage=1 00:11:13.927 --rc genhtml_function_coverage=1 00:11:13.927 --rc genhtml_legend=1 00:11:13.927 --rc geninfo_all_blocks=1 00:11:13.927 --rc geninfo_unexecuted_blocks=1 00:11:13.927 00:11:13.927 ' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:13.927 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:13.927 --rc genhtml_branch_coverage=1 00:11:13.927 --rc genhtml_function_coverage=1 00:11:13.927 --rc genhtml_legend=1 00:11:13.927 --rc geninfo_all_blocks=1 00:11:13.927 --rc geninfo_unexecuted_blocks=1 00:11:13.927 00:11:13.927 ' 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:13.927 
10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:13.927 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- 
# source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.928 10:28:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@50 -- # : 0 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:13.928 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # nvmftestinit 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:13.928 10:28:54 
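The trace just above captures a real failure: `common.sh` line 31 runs `'[' '' -eq 1 ']'` and bash reports `[: : integer expression expected`, because an empty string is not a valid operand for a numeric comparison. A minimal sketch of a defensive pattern (not the SPDK code itself; `flag_is_set` is an invented name) that defaults the variable before comparing:

```shell
#!/usr/bin/env bash
# Hedged sketch: guard numeric tests against empty/unset variables by
# defaulting them, so `[ ... -eq ... ]` never sees an empty operand.
flag_is_set() {
  local flag=${1:-0}   # empty or missing input falls back to 0
  [ "$flag" -eq 1 ]
}

flag_is_set 1  && echo "enabled"    # prints "enabled"
flag_is_set "" || echo "disabled"   # empty input no longer errors; prints "disabled"
```

The `${1:-0}` default covers both unset and empty arguments, which is exactly the case that tripped the test in the log.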
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:11:13.928 10:28:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:20.498 10:29:00 
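The array declarations above (`e810`, `x722`, `mlx`, `pci_devs`) are filled from a `pci_bus_cache` map keyed by `vendor:device` ID. A self-contained sketch of that classification step, using a mocked cache instead of a real `lspci` scan (the PCI addresses below are invented for illustration):

```shell
#!/usr/bin/env bash
# Illustrative sketch of gather_supported_nvmf_pci_devs-style classification,
# with a hand-built pci_bus_cache standing in for a hardware scan.
declare -A pci_bus_cache=(
  ["0x8086:0x159b"]="0000:86:00.0 0000:86:00.1"   # Intel E810 family NICs
  ["0x15b3:0x1017"]="0000:3b:00.0"                # Mellanox ConnectX-5
)

# Unquoted expansion word-splits the space-separated address lists,
# mirroring the e810+=(${pci_bus_cache[...]}) idiom seen in the trace.
e810=(${pci_bus_cache["0x8086:0x159b"]})
mlx=(${pci_bus_cache["0x15b3:0x1017"]})
pci_devs=("${e810[@]}" "${mlx[@]}")

echo "found ${#pci_devs[@]} candidate NVMe-oF NICs"   # prints "found 3 ..."
```

A missing key simply expands to nothing, so unsupported vendor/device pairs contribute zero entries.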
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # net_devs=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # e810=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # x722=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # mlx=() 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:20.498 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:20.498 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.498 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:20.498 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:20.498 Found net devices under 0000:86:00.0: cvl_0_0 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:20.498 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.1: cvl_0_1' 00:11:20.499 Found net devices under 0000:86:00.1: cvl_0_1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 
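The `create_target_ns` steps above (`ip netns add`, building `NVMF_TARGET_NS_CMD`, bringing `lo` up, then moving the target NIC into the namespace) need root on a real host. A dry-run sketch of the same plumbing that prints the commands instead of executing them; `run` is an invented helper, and `cvl_0_1` is taken from the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace setup traced above: swap `echo "$@"` for
# plain "$@" (or sudo "$@") to execute for real.
ns=nvmf_ns_spdk
NVMF_TARGET_NS_CMD=(ip netns exec "$ns")   # prefix for in-namespace commands
run() { echo "$@"; }

run ip netns add "$ns"
run "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # loopback inside the ns
run ip link set cvl_0_1 netns "$ns"                # move target NIC into the ns
```

Keeping the namespace prefix in an array, as the trace does, lets the same helper run a command either on the host (empty prefix) or inside the namespace.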
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # ips=() 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:20.499 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- 
# [[ -n '' ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:20.499 10.0.0.1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:11:20.499 10:29:00 
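The IP pool in the trace is a plain 32-bit counter (`ip_pool=0x0a000001`), and `val_to_ip` turns it into dotted-quad form: 167772161 is 0x0A000001, i.e. 10.0.0.1. A sketch of that conversion using shifts and masks (the trace shows only the final `printf`, so the octet extraction here is a reconstruction):

```shell
#!/usr/bin/env bash
# Sketch of a val_to_ip-style conversion: split a 32-bit integer into four
# octets and print them dotted-quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}

val_to_ip 167772161   # 0x0A000001 -> prints 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> prints 10.0.0.2
```

This is why the setup loop can hand out initiator/target pairs by just incrementing `ip_pool` by two per interface pair.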
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:20.499 10.0.0.2 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD 
]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 
)) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.499 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:20.500 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:20.500 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.405 ms 00:11:20.500 00:11:20.500 --- 10.0.0.1 ping statistics --- 00:11:20.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.500 rtt min/avg/max/mdev = 0.405/0.405/0.405/0.000 ms 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:20.500 PING 10.0.0.2 (10.0.0.2) 56(84) 
bytes of data. 00:11:20.500 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:11:20.500 00:11:20.500 --- 10.0.0.2 ping statistics --- 00:11:20.500 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:20.500 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # return 0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:20.500 10:29:00 
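Throughout the trace, `set_ip` records each address in `/sys/class/net/<dev>/ifalias` via `tee`, and `get_ip_address` later recovers it with `cat`. A self-contained sketch of that store/recover round trip, using a temp file in place of the real sysfs node (which needs the device to exist):

```shell
#!/usr/bin/env bash
# Sketch of the ifalias round trip: a temp file stands in for
# /sys/class/net/cvl_0_0/ifalias so the example runs anywhere.
ifalias=$(mktemp)

echo 10.0.0.1 | tee "$ifalias" >/dev/null   # set_ip: stash the address
ip=$(cat "$ifalias")                        # get_ip_address: read it back

[ -n "$ip" ] && echo "initiator0 -> $ip"    # prints "initiator0 -> 10.0.0.1"
rm -f "$ifalias"
```

Using `ifalias` as a side channel means later helpers can recover an interface's assigned IP without re-parsing `ip addr` output.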
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:20.500 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:20.500 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@109 -- # return 1 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@168 -- # dev= 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@169 -- # return 0 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@16 -- # nvmfappstart -m 0xF 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@328 -- # nvmfpid=3139445 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # waitforlisten 3139445 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@835 -- # '[' -z 3139445 ']' 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:20.501 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 [2024-11-20 10:29:00.635275] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:20.501 [2024-11-20 10:29:00.635326] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:20.501 [2024-11-20 10:29:00.716958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:20.501 [2024-11-20 10:29:00.759404] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:20.501 [2024-11-20 10:29:00.759442] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:20.501 [2024-11-20 10:29:00.759450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:20.501 [2024-11-20 10:29:00.759456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:20.501 [2024-11-20 10:29:00.759461] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:20.501 [2024-11-20 10:29:00.761050] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:20.501 [2024-11-20 10:29:00.761159] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:20.501 [2024-11-20 10:29:00.761241] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:20.501 [2024-11-20 10:29:00.761242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@868 -- # return 0 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 [2024-11-20 10:29:00.910004] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # seq 1 4 00:11:20.501 10:29:00 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null1 102400 512 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 Null1 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 [2024-11-20 10:29:00.963376] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null2 102400 512 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 Null2 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 
10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:00 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null3 102400 512 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 Null3 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.501 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode3 Null3 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # for i in $(seq 1 4) 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@22 -- # rpc_cmd bdev_null_create Null4 102400 512 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 Null4 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:11:20.502 10:29:01 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.502 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:11:20.760 00:11:20.760 Discovery Log Number of Records 6, Generation counter 6 00:11:20.760 =====Discovery Log Entry 0====== 00:11:20.760 trtype: tcp 00:11:20.760 adrfam: ipv4 00:11:20.760 subtype: current discovery subsystem 00:11:20.760 treq: not required 00:11:20.760 portid: 0 00:11:20.760 trsvcid: 4420 00:11:20.760 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:20.760 traddr: 10.0.0.2 00:11:20.760 eflags: explicit discovery connections, duplicate discovery information 00:11:20.760 sectype: none 00:11:20.760 =====Discovery Log Entry 1====== 00:11:20.760 trtype: tcp 00:11:20.760 adrfam: ipv4 00:11:20.760 subtype: nvme subsystem 00:11:20.760 treq: not required 00:11:20.760 portid: 0 00:11:20.760 trsvcid: 4420 00:11:20.760 subnqn: nqn.2016-06.io.spdk:cnode1 00:11:20.760 traddr: 10.0.0.2 00:11:20.760 eflags: none 00:11:20.760 sectype: none 00:11:20.760 =====Discovery Log Entry 2====== 00:11:20.760 trtype: tcp 00:11:20.760 adrfam: ipv4 00:11:20.760 subtype: nvme subsystem 00:11:20.760 treq: not required 00:11:20.760 portid: 0 00:11:20.760 trsvcid: 4420 00:11:20.760 subnqn: nqn.2016-06.io.spdk:cnode2 00:11:20.760 traddr: 10.0.0.2 00:11:20.760 eflags: none 00:11:20.760 sectype: none 00:11:20.760 =====Discovery Log Entry 3====== 00:11:20.760 trtype: tcp 00:11:20.760 adrfam: ipv4 00:11:20.760 subtype: nvme subsystem 00:11:20.760 treq: not required 00:11:20.761 portid: 
0 00:11:20.761 trsvcid: 4420 00:11:20.761 subnqn: nqn.2016-06.io.spdk:cnode3 00:11:20.761 traddr: 10.0.0.2 00:11:20.761 eflags: none 00:11:20.761 sectype: none 00:11:20.761 =====Discovery Log Entry 4====== 00:11:20.761 trtype: tcp 00:11:20.761 adrfam: ipv4 00:11:20.761 subtype: nvme subsystem 00:11:20.761 treq: not required 00:11:20.761 portid: 0 00:11:20.761 trsvcid: 4420 00:11:20.761 subnqn: nqn.2016-06.io.spdk:cnode4 00:11:20.761 traddr: 10.0.0.2 00:11:20.761 eflags: none 00:11:20.761 sectype: none 00:11:20.761 =====Discovery Log Entry 5====== 00:11:20.761 trtype: tcp 00:11:20.761 adrfam: ipv4 00:11:20.761 subtype: discovery subsystem referral 00:11:20.761 treq: not required 00:11:20.761 portid: 0 00:11:20.761 trsvcid: 4430 00:11:20.761 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:11:20.761 traddr: 10.0.0.2 00:11:20.761 eflags: none 00:11:20.761 sectype: none 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@34 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:11:20.761 Perform nvmf subsystem discovery via RPC 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_get_subsystems 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 [ 00:11:20.761 { 00:11:20.761 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:11:20.761 "subtype": "Discovery", 00:11:20.761 "listen_addresses": [ 00:11:20.761 { 00:11:20.761 "trtype": "TCP", 00:11:20.761 "adrfam": "IPv4", 00:11:20.761 "traddr": "10.0.0.2", 00:11:20.761 "trsvcid": "4420" 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "allow_any_host": true, 00:11:20.761 "hosts": [] 00:11:20.761 }, 00:11:20.761 { 00:11:20.761 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:11:20.761 "subtype": "NVMe", 00:11:20.761 "listen_addresses": [ 
00:11:20.761 { 00:11:20.761 "trtype": "TCP", 00:11:20.761 "adrfam": "IPv4", 00:11:20.761 "traddr": "10.0.0.2", 00:11:20.761 "trsvcid": "4420" 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "allow_any_host": true, 00:11:20.761 "hosts": [], 00:11:20.761 "serial_number": "SPDK00000000000001", 00:11:20.761 "model_number": "SPDK bdev Controller", 00:11:20.761 "max_namespaces": 32, 00:11:20.761 "min_cntlid": 1, 00:11:20.761 "max_cntlid": 65519, 00:11:20.761 "namespaces": [ 00:11:20.761 { 00:11:20.761 "nsid": 1, 00:11:20.761 "bdev_name": "Null1", 00:11:20.761 "name": "Null1", 00:11:20.761 "nguid": "9202E891E6204DEB983ADA5E7A9ED500", 00:11:20.761 "uuid": "9202e891-e620-4deb-983a-da5e7a9ed500" 00:11:20.761 } 00:11:20.761 ] 00:11:20.761 }, 00:11:20.761 { 00:11:20.761 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:11:20.761 "subtype": "NVMe", 00:11:20.761 "listen_addresses": [ 00:11:20.761 { 00:11:20.761 "trtype": "TCP", 00:11:20.761 "adrfam": "IPv4", 00:11:20.761 "traddr": "10.0.0.2", 00:11:20.761 "trsvcid": "4420" 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "allow_any_host": true, 00:11:20.761 "hosts": [], 00:11:20.761 "serial_number": "SPDK00000000000002", 00:11:20.761 "model_number": "SPDK bdev Controller", 00:11:20.761 "max_namespaces": 32, 00:11:20.761 "min_cntlid": 1, 00:11:20.761 "max_cntlid": 65519, 00:11:20.761 "namespaces": [ 00:11:20.761 { 00:11:20.761 "nsid": 1, 00:11:20.761 "bdev_name": "Null2", 00:11:20.761 "name": "Null2", 00:11:20.761 "nguid": "B91BE7A587C04EE5BD317235F8BB6C28", 00:11:20.761 "uuid": "b91be7a5-87c0-4ee5-bd31-7235f8bb6c28" 00:11:20.761 } 00:11:20.761 ] 00:11:20.761 }, 00:11:20.761 { 00:11:20.761 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:11:20.761 "subtype": "NVMe", 00:11:20.761 "listen_addresses": [ 00:11:20.761 { 00:11:20.761 "trtype": "TCP", 00:11:20.761 "adrfam": "IPv4", 00:11:20.761 "traddr": "10.0.0.2", 00:11:20.761 "trsvcid": "4420" 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "allow_any_host": true, 00:11:20.761 "hosts": [], 00:11:20.761 
"serial_number": "SPDK00000000000003", 00:11:20.761 "model_number": "SPDK bdev Controller", 00:11:20.761 "max_namespaces": 32, 00:11:20.761 "min_cntlid": 1, 00:11:20.761 "max_cntlid": 65519, 00:11:20.761 "namespaces": [ 00:11:20.761 { 00:11:20.761 "nsid": 1, 00:11:20.761 "bdev_name": "Null3", 00:11:20.761 "name": "Null3", 00:11:20.761 "nguid": "1D9352BD5E6644D1B6601878FE0D8D80", 00:11:20.761 "uuid": "1d9352bd-5e66-44d1-b660-1878fe0d8d80" 00:11:20.761 } 00:11:20.761 ] 00:11:20.761 }, 00:11:20.761 { 00:11:20.761 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:11:20.761 "subtype": "NVMe", 00:11:20.761 "listen_addresses": [ 00:11:20.761 { 00:11:20.761 "trtype": "TCP", 00:11:20.761 "adrfam": "IPv4", 00:11:20.761 "traddr": "10.0.0.2", 00:11:20.761 "trsvcid": "4420" 00:11:20.761 } 00:11:20.761 ], 00:11:20.761 "allow_any_host": true, 00:11:20.761 "hosts": [], 00:11:20.761 "serial_number": "SPDK00000000000004", 00:11:20.761 "model_number": "SPDK bdev Controller", 00:11:20.761 "max_namespaces": 32, 00:11:20.761 "min_cntlid": 1, 00:11:20.761 "max_cntlid": 65519, 00:11:20.761 "namespaces": [ 00:11:20.761 { 00:11:20.761 "nsid": 1, 00:11:20.761 "bdev_name": "Null4", 00:11:20.761 "name": "Null4", 00:11:20.761 "nguid": "DB80E27EDE72471F9FA72D5AABB570F3", 00:11:20.761 "uuid": "db80e27e-de72-471f-9fa7-2d5aabb570f3" 00:11:20.761 } 00:11:20.761 ] 00:11:20.761 } 00:11:20.761 ] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # seq 1 4 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null1 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null2 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 
1 4) 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null3 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # for i in $(seq 1 4) 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # rpc_cmd bdev_null_delete Null4 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@10 -- # set +x 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:11:20.761 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_get_bdevs 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # jq -r '.[].name' 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # check_bdevs= 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@45 -- # '[' -n '' ']' 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@52 -- # nvmftestfini 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@99 -- # sync 00:11:20.762 
10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@102 -- # set +e 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:20.762 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:20.762 rmmod nvme_tcp 00:11:20.762 rmmod nvme_fabrics 00:11:21.021 rmmod nvme_keyring 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@106 -- # set -e 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@107 -- # return 0 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@336 -- # '[' -n 3139445 ']' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@337 -- # killprocess 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # '[' -z 3139445 ']' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # kill -0 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # uname 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@964 -- # '[' 
reactor_0 = sudo ']' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3139445' 00:11:21.021 killing process with pid 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@973 -- # kill 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@978 -- # wait 3139445 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@264 -- # local dev 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:21.021 10:29:01 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@130 -- # return 0 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:23.552 10:29:03 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:11:23.552 
10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/setup.sh@284 -- # iptr 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-save 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:11:23.552 00:11:23.552 real 0m9.627s 00:11:23.552 user 0m5.812s 00:11:23.552 sys 0m5.018s 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:11:23.552 ************************************ 00:11:23.552 END TEST nvmf_target_discovery 00:11:23.552 ************************************ 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:23.552 ************************************ 00:11:23.552 START TEST nvmf_referrals 00:11:23.552 ************************************ 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:11:23.552 * Looking for test storage... 
00:11:23.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lcov --version 00:11:23.552 10:29:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # IFS=.-: 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@336 -- # read -ra ver1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # IFS=.-: 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@337 -- # read -ra ver2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@338 -- # local 'op=<' 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@340 -- # ver1_l=2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@341 -- # ver2_l=1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@344 -- # case "$op" in 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@345 -- # : 1 00:11:23.552 10:29:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # decimal 1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@365 -- # ver1[v]=1 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # decimal 2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@353 -- # local d=2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@355 -- # echo 2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@366 -- # ver2[v]=2 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:23.552 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@368 -- # return 0 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.553 
--rc genhtml_branch_coverage=1 00:11:23.553 --rc genhtml_function_coverage=1 00:11:23.553 --rc genhtml_legend=1 00:11:23.553 --rc geninfo_all_blocks=1 00:11:23.553 --rc geninfo_unexecuted_blocks=1 00:11:23.553 00:11:23.553 ' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.553 --rc genhtml_branch_coverage=1 00:11:23.553 --rc genhtml_function_coverage=1 00:11:23.553 --rc genhtml_legend=1 00:11:23.553 --rc geninfo_all_blocks=1 00:11:23.553 --rc geninfo_unexecuted_blocks=1 00:11:23.553 00:11:23.553 ' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.553 --rc genhtml_branch_coverage=1 00:11:23.553 --rc genhtml_function_coverage=1 00:11:23.553 --rc genhtml_legend=1 00:11:23.553 --rc geninfo_all_blocks=1 00:11:23.553 --rc geninfo_unexecuted_blocks=1 00:11:23.553 00:11:23.553 ' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:23.553 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:23.553 --rc genhtml_branch_coverage=1 00:11:23.553 --rc genhtml_function_coverage=1 00:11:23.553 --rc genhtml_legend=1 00:11:23.553 --rc geninfo_all_blocks=1 00:11:23.553 --rc geninfo_unexecuted_blocks=1 00:11:23.553 00:11:23.553 ' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:23.553 
10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@15 -- # shopt -s extglob 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@50 
-- # : 0 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:23.553 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # 
nvmftestinit 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # remove_target_ns 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # xtrace_disable 00:11:23.553 10:29:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # pci_devs=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:30.120 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # net_devs=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # e810=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@136 -- # local -ga e810 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # x722=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@137 -- # local -ga x722 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # mlx=() 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@138 -- # local -ga mlx 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:30.120 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:30.120 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.120 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:30.120 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up 
== up ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:30.120 Found net devices under 0000:86:00.0: cvl_0_0 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:30.120 Found net devices under 0000:86:00.1: cvl_0_1 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:30.120 10:29:09 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@262 -- # is_hw=yes 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@257 -- # create_target_ns 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.120 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@28 -- # local -g _dev 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # ips=() 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772161 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:30.121 10.0.0.1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@11 -- # local val=167772162 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:30.121 
10.0.0.2 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:30.121 10:29:09 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I 
INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:30.121 10:29:10 
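The trace above (setup.sh@142–217 and the iptables call at common.sh@541) walks through one initiator/target interface pair: the target NIC is moved into a fresh network namespace, both sides get addresses from the 10.0.0.0/24 pool, the links are brought up, and TCP port 4420 is opened. A dry-run sketch of that sequence is below; `run` is a hypothetical wrapper that only echoes, so it is safe to execute without root and without the cvl_0_x devices present.

```shell
#!/usr/bin/env bash
# Dry-run sketch of the interface-pair setup traced in the log above.
# "run" is an illustrative wrapper (not part of SPDK) that echoes the
# command instead of executing it, so no privileges or NICs are needed.
run() { echo "+ $*"; }

# Names taken from the log: namespace and the two physical net devices.
ns=nvmf_ns_spdk initiator=cvl_0_0 target=cvl_0_1

run ip netns add "$ns"                                   # create_target_ns
run ip netns exec "$ns" ip link set lo up                # loopback inside ns
run ip link set "$target" netns "$ns"                    # add_to_ns
run ip addr add 10.0.0.1/24 dev "$initiator"             # set_ip (host side)
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target"
run ip link set "$initiator" up                          # set_up
run ip netns exec "$ns" ip link set "$target" up
run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
```

Running the sketch prints the same command sequence that appears in the trace, one `+`-prefixed line per step.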
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:30.121 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:30.121 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.440 ms 00:11:30.121 00:11:30.121 --- 10.0.0.1 ping statistics --- 00:11:30.121 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.121 rtt min/avg/max/mdev = 0.440/0.440/0.440/0.000 ms 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.121 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:30.122 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:30.122 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.214 ms 00:11:30.122 00:11:30.122 --- 10.0.0.2 ping statistics --- 00:11:30.122 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:30.122 rtt min/avg/max/mdev = 0.214/0.214/0.214/0.000 ms 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # return 0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:30.122 
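The `val_to_ip` calls traced above (setup.sh@11–13) turn the 32-bit pool values 167772161 and 167772162 into the dotted quads 10.0.0.1 and 10.0.0.2 that the pings then verify. A standalone sketch of that conversion, assuming it shifts the value into four octets the way the traced `printf '%u.%u.%u.%u\n' 10 0 0 1` output implies:

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip conversion seen in the trace: split a 32-bit
# integer into four octets and print it as a dotted-quad IPv4 address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >>  8) & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This also explains the pool arithmetic in the trace (setup.sh@31/@33): each interface pair consumes two consecutive values, so the initiator gets `ip_pool` and the target gets `ip_pool + 1`.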
10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:30.122 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:30.122 
10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@107 -- # local dev=target1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@109 -- # return 1 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@168 -- # dev= 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@169 -- # return 0 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 
00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@328 -- # nvmfpid=3143209 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # waitforlisten 3143209 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@835 -- # '[' -z 3143209 ']' 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:30.122 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:30.122 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.122 [2024-11-20 10:29:10.271413] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:30.123 [2024-11-20 10:29:10.271459] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:30.123 [2024-11-20 10:29:10.332762] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:30.123 [2024-11-20 10:29:10.375889] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:30.123 [2024-11-20 10:29:10.375923] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:30.123 [2024-11-20 10:29:10.375930] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:30.123 [2024-11-20 10:29:10.375936] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:30.123 [2024-11-20 10:29:10.375942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:30.123 [2024-11-20 10:29:10.377493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:30.123 [2024-11-20 10:29:10.377600] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:30.123 [2024-11-20 10:29:10.377706] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:30.123 [2024-11-20 10:29:10.377707] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@868 -- # return 0 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 [2024-11-20 10:29:10.514054] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 [2024-11-20 10:29:10.527352] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- 
# rpc_cmd nvmf_discovery_get_referrals 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:11:30.123 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.123 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq 
-r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.382 10:29:10 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:30.382 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:11:30.641 10:29:11 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.641 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:30.900 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t 
tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.158 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:11:31.159 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:11:31.419 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:11:31.419 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:11:31.419 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:11:31.419 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.419 10:29:11 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery 
subsystem referral' 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.419 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # 
get_referral_ips nvme 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:11:31.678 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@99 -- # sync 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@102 -- # set +e 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:31.936 rmmod nvme_tcp 00:11:31.936 rmmod nvme_fabrics 00:11:31.936 rmmod nvme_keyring 00:11:31.936 10:29:12 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@106 -- # set -e 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@107 -- # return 0 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@336 -- # '[' -n 3143209 ']' 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@337 -- # killprocess 3143209 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # '[' -z 3143209 ']' 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # kill -0 3143209 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # uname 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3143209 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3143209' 00:11:31.936 killing process with pid 3143209 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@973 -- # kill 3143209 00:11:31.936 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@978 -- # wait 3143209 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # 
nvmf_fini 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@264 -- # local dev 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:32.195 10:29:12 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@268 -- # delete_main_bridge 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@130 -- # return 0 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:11:34.125 10:29:14 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # _dev=0 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@41 -- # dev_map=() 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/setup.sh@284 -- # iptr 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-save 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@542 -- # iptables-restore 00:11:34.125 00:11:34.125 real 0m10.947s 00:11:34.125 user 0m12.117s 00:11:34.125 sys 0m5.308s 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1130 -- # xtrace_disable 00:11:34.125 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:11:34.125 ************************************ 00:11:34.125 END TEST nvmf_referrals 00:11:34.125 ************************************ 00:11:34.384 10:29:14 
nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:34.384 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:11:34.384 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:34.384 10:29:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:34.384 ************************************ 00:11:34.384 START TEST nvmf_connect_disconnect 00:11:34.384 ************************************ 00:11:34.384 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:11:34.384 * Looking for test storage... 00:11:34.384 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:34.384 10:29:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # IFS=.-: 
00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@345 -- # : 1 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # decimal 1 00:11:34.384 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=1 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 1 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # decimal 2 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@353 -- # local d=2 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@355 -- # echo 2 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@368 -- # return 0 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:11:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.385 --rc genhtml_branch_coverage=1 00:11:34.385 --rc 
genhtml_function_coverage=1 00:11:34.385 --rc genhtml_legend=1 00:11:34.385 --rc geninfo_all_blocks=1 00:11:34.385 --rc geninfo_unexecuted_blocks=1 00:11:34.385 00:11:34.385 ' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:11:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.385 --rc genhtml_branch_coverage=1 00:11:34.385 --rc genhtml_function_coverage=1 00:11:34.385 --rc genhtml_legend=1 00:11:34.385 --rc geninfo_all_blocks=1 00:11:34.385 --rc geninfo_unexecuted_blocks=1 00:11:34.385 00:11:34.385 ' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:11:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.385 --rc genhtml_branch_coverage=1 00:11:34.385 --rc genhtml_function_coverage=1 00:11:34.385 --rc genhtml_legend=1 00:11:34.385 --rc geninfo_all_blocks=1 00:11:34.385 --rc geninfo_unexecuted_blocks=1 00:11:34.385 00:11:34.385 ' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:11:34.385 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:34.385 --rc genhtml_branch_coverage=1 00:11:34.385 --rc genhtml_function_coverage=1 00:11:34.385 --rc genhtml_legend=1 00:11:34.385 --rc geninfo_all_blocks=1 00:11:34.385 --rc geninfo_unexecuted_blocks=1 00:11:34.385 00:11:34.385 ' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:34.385 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:11:34.644 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@50 -- # : 0 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:11:34.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:34.644 10:29:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # prepare_net_devs 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:11:34.644 10:29:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:11:41.209 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # e810=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # x722=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:41.209 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound 
]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:41.209 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.209 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:41.209 Found net devices under 0000:86:00.0: cvl_0_0 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:11:41.209 Found net devices under 0000:86:00.1: cvl_0_1 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:11:41.209 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@28 -- # local -g _dev 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:11:41.210 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:11:41.210 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:11:41.210 10.0.0.1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:11:41.210 10:29:20 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:11:41.210 10.0.0.2 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 
in_ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:11:41.210 10:29:20 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:11:41.210 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:41.210 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:11:41.211 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:41.211 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.433 ms 00:11:41.211 00:11:41.211 --- 10.0.0.1 ping statistics --- 00:11:41.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.211 rtt min/avg/max/mdev = 0.433/0.433/0.433/0.000 ms 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:41.211 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:11:41.211 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:41.211 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.201 ms 00:11:41.211 00:11:41.211 --- 10.0.0.2 ping statistics --- 00:11:41.211 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:41.211 rtt min/avg/max/mdev = 0.201/0.201/0.201/0.000 ms 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # return 0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:11:41.211 
10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:11:41.211 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:11:41.211 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:11:41.211 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@109 -- # return 1 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@168 -- # dev= 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@169 -- # return 0 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:11:41.212 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@726 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@328 -- # nvmfpid=3147179 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # waitforlisten 3147179 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@835 -- # '[' -z 3147179 ']' 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:41.212 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 [2024-11-20 10:29:21.276669] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:11:41.212 [2024-11-20 10:29:21.276722] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:41.212 [2024-11-20 10:29:21.356576] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:41.212 [2024-11-20 10:29:21.399199] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:41.212 [2024-11-20 10:29:21.399242] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:41.212 [2024-11-20 10:29:21.399249] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:41.212 [2024-11-20 10:29:21.399255] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:41.212 [2024-11-20 10:29:21.399261] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:11:41.212 [2024-11-20 10:29:21.400777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:11:41.212 [2024-11-20 10:29:21.400885] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:11:41.212 [2024-11-20 10:29:21.400998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:11:41.212 [2024-11-20 10:29:21.400999] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@868 -- # return 0 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@732 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 [2024-11-20 10:29:21.537794] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 
64 512 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:41.212 10:29:21 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:11:41.212 [2024-11-20 10:29:21.605010] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:11:41.212 10:29:21 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:11:44.490 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.774 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:51.054 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:54.336 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.618 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@99 -- # sync 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@102 -- # set +e 00:11:57.618 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:11:57.619 10:29:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:11:57.619 rmmod nvme_tcp 00:11:57.619 rmmod nvme_fabrics 00:11:57.619 rmmod nvme_keyring 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@106 -- # set -e 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@107 -- # return 0 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@336 -- # '[' -n 3147179 ']' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@337 -- # killprocess 3147179 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3147179 ']' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # kill -0 3147179 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # uname 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3147179 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3147179' 00:11:57.619 killing process with pid 3147179 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@973 -- # kill 3147179 00:11:57.619 10:29:38 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@978 -- # wait 3147179 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@264 -- # local dev 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:11:57.619 10:29:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@130 -- # return 0 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:00.154 10:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/setup.sh@284 -- # iptr 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # grep -v 
SPDK_NVMF 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:12:00.154 00:12:00.154 real 0m25.427s 00:12:00.154 user 1m8.678s 00:12:00.154 sys 0m5.935s 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:00.154 ************************************ 00:12:00.154 END TEST nvmf_connect_disconnect 00:12:00.154 ************************************ 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:00.154 ************************************ 00:12:00.154 START TEST nvmf_multitarget 00:12:00.154 ************************************ 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:00.154 * Looking for test storage... 
00:12:00.154 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lcov --version 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # IFS=.-: 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@336 -- # read -ra ver1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # IFS=.-: 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@337 -- # read -ra ver2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@338 -- # local 'op=<' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@340 -- # ver1_l=2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@341 -- # ver2_l=1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@344 -- # case "$op" in 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@345 -- # 
: 1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # decimal 1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@365 -- # ver1[v]=1 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # decimal 2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@353 -- # local d=2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@355 -- # echo 2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@366 -- # ver2[v]=2 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@368 -- # return 0 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:00.154 --rc 
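The trace above walks through `scripts/common.sh`'s `cmp_versions` helper deciding that lcov 1.15 is older than 2. A simplified stand-alone reconstruction of that "<" path (the `lt` name mirrors the wrapper seen in the trace; this is a sketch, not SPDK's exact code):

```shell
# Simplified reconstruction of the cmp_versions "<" path traced above:
# split both versions on '.', '-' and ':' and compare field by field
# numerically, padding missing fields with 0.
lt() {
    local IFS=.-:
    local -a ver1 ver2
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$2"
    local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    done
    return 1   # equal versions are not "less than"
}

lt 1.15 2 && echo "lcov 1.15 predates 2"   # the comparison the trace performs
```

Comparing numerically per field (rather than as strings) is what makes `1.15.3 < 1.15.10` come out right.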
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.154 --rc genhtml_branch_coverage=1 00:12:00.154 --rc genhtml_function_coverage=1 00:12:00.154 --rc genhtml_legend=1 00:12:00.154 --rc geninfo_all_blocks=1 00:12:00.154 --rc geninfo_unexecuted_blocks=1 00:12:00.154 00:12:00.154 ' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.154 --rc genhtml_branch_coverage=1 00:12:00.154 --rc genhtml_function_coverage=1 00:12:00.154 --rc genhtml_legend=1 00:12:00.154 --rc geninfo_all_blocks=1 00:12:00.154 --rc geninfo_unexecuted_blocks=1 00:12:00.154 00:12:00.154 ' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.154 --rc genhtml_branch_coverage=1 00:12:00.154 --rc genhtml_function_coverage=1 00:12:00.154 --rc genhtml_legend=1 00:12:00.154 --rc geninfo_all_blocks=1 00:12:00.154 --rc geninfo_unexecuted_blocks=1 00:12:00.154 00:12:00.154 ' 00:12:00.154 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:00.154 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:00.154 --rc genhtml_branch_coverage=1 00:12:00.154 --rc genhtml_function_coverage=1 00:12:00.154 --rc genhtml_legend=1 00:12:00.154 --rc geninfo_all_blocks=1 00:12:00.154 --rc geninfo_unexecuted_blocks=1 00:12:00.155 00:12:00.155 ' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:00.155 10:29:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@15 -- # shopt -s extglob 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
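The `paths/export.sh` lines above prepend the same toolchain directories on every source, so the exported PATH accumulates many duplicate entries. A hypothetical helper (`dedup_path` is not an SPDK function, just an illustration) that would collapse such a PATH while keeping first-occurrence order:

```shell
# Hypothetical helper (not part of SPDK) that removes duplicate PATH
# entries while preserving the order of first occurrence.
dedup_path() {
    local entry out='' seen=:
    local IFS=:
    for entry in $1; do          # IFS=: splits the PATH into entries
        case "$seen" in
            *:"$entry":*) ;;                          # already emitted, skip
            *) out+="${out:+:}$entry"; seen+="$entry:" ;;
        esac
    done
    printf '%s\n' "$out"
}

dedup_path "/opt/go/1.21.1/bin:/usr/bin:/opt/go/1.21.1/bin:/bin"
```

Tracking seen entries with colon delimiters on both sides avoids false matches between entries like `/bin` and `/usr/bin`.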
nvmf/common.sh@50 -- # : 0 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:00.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:00.155 10:29:40 
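The `common.sh: line 31: [: : integer expression expected` message logged above is the classic failure mode of `'[' '' -eq 1 ']'`: `-eq` requires an integer operand, and the variable being tested expanded to an empty string. A minimal reproduction and the usual guard (variable name `flag` is illustrative):

```shell
# Reproduction of the "[: : integer expression expected" failure logged
# above: test(1) is handed an empty string where -eq needs an integer.
flag=""
[ "$flag" -eq 1 ] 2>/dev/null || echo "numeric test failed on empty value"

# A common guard: default the possibly-empty value to 0 before the test.
[ "${flag:-0}" -eq 1 ] && echo "flag set" || echo "flag unset"
```

The script continues anyway here because the failed `[` simply takes its false branch; the error line in the log is stderr noise, not a fatal condition.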
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # remove_target_ns 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # xtrace_disable 00:12:00.155 10:29:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # pci_devs=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # net_devs=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:06.726 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # e810=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@136 -- # local -ga e810 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # x722=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@137 -- # local -ga x722 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # mlx=() 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@138 -- # local -ga mlx 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:06.726 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:06.727 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:06.727 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:06.727 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:06.727 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:06.727 Found net devices under 0000:86:00.0: cvl_0_0 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:06.727 Found net devices under 0000:86:00.1: cvl_0_1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@262 -- # is_hw=yes 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:06.727 
10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@257 -- # create_target_ns 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@27 -- # local -gA dev_map 
00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@28 -- # local -g _dev 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # ips=() 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772161 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:06.727 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 
-- # tee /sys/class/net/cvl_0_0/ifalias 00:12:06.727 10.0.0.1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@11 -- # local val=167772162 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:06.728 10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
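The `val_to_ip` calls traced above turn the integer pool values 167772161 and 167772162 into `10.0.0.1` and `10.0.0.2`. A simplified reconstruction of that conversion (a sketch of the idea; SPDK's exact implementation may differ):

```shell
# Simplified reconstruction of the val_to_ip helper seen in the trace:
# unpack a 32-bit integer into dotted-quad notation, high byte first.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 0xff )) $(( (val >> 16) & 0xff )) \
        $(( (val >>  8) & 0xff )) $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1 (initiator side above)
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2 (target side above)
```

Keeping the pool as an integer (`ip_pool=0x0a000001`) lets the setup loop hand out consecutive initiator/target address pairs with plain arithmetic (`ip_pool += 2`).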
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # 
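The `nvmf/setup.sh` sequence above creates the `nvmf_ns_spdk` namespace, moves one physical port into it, addresses both ends, brings the links up, and opens TCP port 4420. A condensed stand-alone sketch of the same plumbing (requires root; a veth pair stands in for the physical `cvl_0_0`/`cvl_0_1` ports, and all names here are illustrative):

```shell
# Sketch of the namespace setup performed above, using a veth pair in
# place of the test rig's physical NICs. Run as root; names are examples.
NS=nvmf_ns_demo
ip netns add "$NS"                                   # create_target_ns
ip link add veth_init type veth peer name veth_tgt   # stand-in for the phy pair
ip link set veth_tgt netns "$NS"                     # add_to_ns
ip addr add 10.0.0.1/24 dev veth_init                # set_ip (initiator side)
ip netns exec "$NS" ip addr add 10.0.0.2/24 dev veth_tgt   # set_ip (target side)
ip link set veth_init up                             # set_up
ip netns exec "$NS" ip link set lo up
ip netns exec "$NS" ip link set veth_tgt up
iptables -I INPUT 1 -i veth_init -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
ip netns exec "$NS" ping -c 1 10.0.0.1               # ping_ips sanity check
```

Isolating the target side in its own namespace is what lets initiator and target share one host while still talking over a real IP path, which the `ping_ips` step then verifies.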
dev_map["$key_initiator"]=cvl_0_0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:06.728 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:06.728 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
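[Editor's note] The address lookups in the trace above all reduce to one convention: setup.sh stores each test interface's IP in the interface's sysfs `ifalias` file and reads it back with `cat`. A minimal sketch of that lookup — the `SYSFS_NET` override is an illustration-only addition so the function can be exercised against a mock tree; the real script reads `/sys/class/net` directly:

```shell
# Sketch of the ifalias-based IP lookup seen in the trace above.
# SYSFS_NET is a hypothetical override for testing; it defaults to the
# real sysfs location that nvmf/setup.sh reads.
get_ip_address() {
	local dev=$1 ip
	ip=$(cat "${SYSFS_NET:-/sys/class/net}/$dev/ifalias" 2> /dev/null)
	# Only echo when an alias was actually set on the device
	[[ -n $ip ]] && echo "$ip"
}
```

For a device inside the target namespace the same read is simply prefixed with `ip netns exec nvmf_ns_spdk`, as the `eval 'ip netns exec nvmf_ns_spdk cat ...'` lines show.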
00:12:06.728 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.455 ms 00:12:06.728 00:12:06.728 --- 10.0.0.1 ping statistics --- 00:12:06.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.728 rtt min/avg/max/mdev = 0.455/0.455/0.455/0.000 ms 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:06.728 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:06.728 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:12:06.728 00:12:06.728 --- 10.0.0.2 ping statistics --- 00:12:06.728 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:06.728 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # return 0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- 
nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:06.728 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.1 
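[Editor's note] Each lookup above is paired with a single ping, optionally run inside the target namespace by prefixing the command through a nameref and `eval`. A simplified sketch of that `ping_ip` pattern (the exact helper lives in nvmf/setup.sh; this version trims the count handling):

```shell
# Simplified ping_ip: $2, when non-empty, names an array variable whose
# contents prefix the ping command (e.g. "ip netns exec nvmf_ns_spdk").
ping_ip() {
	local ip=$1 in_ns=$2 count=1
	if [[ -n $in_ns ]]; then
		local -n ns=$in_ns # nameref to the wrapper-command array
		eval "${ns[*]} ping -c $count $ip"
	else
		eval "ping -c $count $ip"
	fi
}
```

Usage mirrors the trace: `NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk); ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD`.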
00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@196 -- # get_target_ip_address 
1 NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@107 -- # local dev=target1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@109 -- # return 1 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@168 -- # dev= 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@169 -- # return 0 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:06.729 10:29:46 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@328 -- # nvmfpid=3153703 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # waitforlisten 3153703 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@835 -- # '[' -z 3153703 ']' 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:06.729 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
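[Editor's note] `nvmfappstart` backgrounds `nvmf_tgt` and then blocks in `waitforlisten` until the RPC socket at `/var/tmp/spdk.sock` is ready, which is what the "Waiting for process to start up..." message announces. The retry loop can be sketched roughly as follows — a simplified stand-in for the autotest_common.sh helper, which additionally verifies the pid is alive and that the socket answers RPCs:

```shell
# Rough stand-in for waitforlisten: poll until the RPC socket path
# appears or retries run out.
wait_for_socket() {
	local rpc_addr=${1:-/var/tmp/spdk.sock} max_retries=${2:-100} i
	for ((i = 0; i < max_retries; i++)); do
		# -e rather than -S so this sketch also works on a plain file
		[[ -e $rpc_addr ]] && return 0
		sleep 0.1
	done
	return 1
}
```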
00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:06.729 10:29:46 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.729 [2024-11-20 10:29:46.798393] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:06.729 [2024-11-20 10:29:46.798446] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:06.729 [2024-11-20 10:29:46.877306] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:06.729 [2024-11-20 10:29:46.917272] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:06.729 [2024-11-20 10:29:46.917312] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:06.729 [2024-11-20 10:29:46.917321] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:06.729 [2024-11-20 10:29:46.917327] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:06.729 [2024-11-20 10:29:46.917333] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
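[Editor's note] Once the target is up, the script installs signal traps so shared memory is collected and `nvmftestfini` runs no matter how the test exits (`trap 'process_shm ...; nvmftestfini' SIGINT SIGTERM EXIT` below). The pattern in isolation, with a placeholder handler standing in for the real helpers:

```shell
# Minimal demo of the trap pattern from the trace: one handler covers
# SIGINT, SIGTERM and normal EXIT, so cleanup runs however the test
# ends. cleanup here is a stand-in for process_shm/nvmftestfini.
demo_trap() (
	cleanup() { echo "cleanup ran"; }
	trap 'cleanup' SIGINT SIGTERM EXIT
	: # the test body would run here
)
```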
00:12:06.729 [2024-11-20 10:29:46.918755] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:06.729 [2024-11-20 10:29:46.918866] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:06.729 [2024-11-20 10:29:46.918972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:06.729 [2024-11-20 10:29:46.918972] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@868 -- # return 0 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n 
nvmf_tgt_1 -s 32 00:12:06.729 "nvmf_tgt_1" 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:06.729 "nvmf_tgt_2" 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.729 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:06.988 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:06.988 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:06.988 true 00:12:06.988 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:06.988 true 00:12:06.988 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:06.988 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:07.247 10:29:47 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@99 -- # sync 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@102 -- # set +e 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:07.247 rmmod nvme_tcp 00:12:07.247 rmmod nvme_fabrics 00:12:07.247 rmmod nvme_keyring 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@106 -- # set -e 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@107 -- # return 0 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@336 -- # '[' -n 3153703 ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@337 -- # killprocess 3153703 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # '[' -z 3153703 ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # kill -0 3153703 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # uname 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3153703 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@964 -- 
# '[' reactor_0 = sudo ']' 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3153703' 00:12:07.247 killing process with pid 3153703 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@973 -- # kill 3153703 00:12:07.247 10:29:47 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@978 -- # wait 3153703 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # nvmf_fini 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@264 -- # local dev 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:07.506 10:29:48 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@130 -- # return 0 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:09.441 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:09.441 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # _dev=0 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@41 -- # dev_map=() 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/setup.sh@284 -- # iptr 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # 
iptables-save 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@542 -- # iptables-restore 00:12:09.735 00:12:09.735 real 0m9.764s 00:12:09.735 user 0m7.315s 00:12:09.735 sys 0m4.942s 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:09.735 ************************************ 00:12:09.735 END TEST nvmf_multitarget 00:12:09.735 ************************************ 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:09.735 ************************************ 00:12:09.735 START TEST nvmf_rpc 00:12:09.735 ************************************ 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:09.735 * Looking for test storage... 
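[Editor's note] The teardown above removes the SPDK firewall rules in bulk rather than one by one: because `ipts` tagged every inserted rule with an `SPDK_NVMF` comment (visible in the earlier `iptables -I INPUT ... -m comment --comment 'SPDK_NVMF:...'` line), `iptr` can round-trip the whole ruleset through a filter. The idea in isolation, written as a pure text filter so it can run without touching a live firewall:

```shell
# The iptr idea: drop every rule carrying the SPDK_NVMF comment tag,
# then feed the remainder back to iptables-restore. Real usage in
# nvmf/common.sh is effectively:
#   iptables-save | strip_spdk_rules | iptables-restore
strip_spdk_rules() {
	grep -v SPDK_NVMF
}
```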
00:12:09.735 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lcov --version 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@344 -- # case "$op" in 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@345 -- # : 1 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:09.735 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:09.735 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # decimal 1 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=1 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 1 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # decimal 2 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@353 -- # local d=2 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@355 -- # echo 2 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@368 -- # return 0 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.736 --rc genhtml_branch_coverage=1 00:12:09.736 --rc genhtml_function_coverage=1 00:12:09.736 --rc genhtml_legend=1 00:12:09.736 --rc geninfo_all_blocks=1 00:12:09.736 --rc geninfo_unexecuted_blocks=1 
00:12:09.736 00:12:09.736 ' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.736 --rc genhtml_branch_coverage=1 00:12:09.736 --rc genhtml_function_coverage=1 00:12:09.736 --rc genhtml_legend=1 00:12:09.736 --rc geninfo_all_blocks=1 00:12:09.736 --rc geninfo_unexecuted_blocks=1 00:12:09.736 00:12:09.736 ' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.736 --rc genhtml_branch_coverage=1 00:12:09.736 --rc genhtml_function_coverage=1 00:12:09.736 --rc genhtml_legend=1 00:12:09.736 --rc geninfo_all_blocks=1 00:12:09.736 --rc geninfo_unexecuted_blocks=1 00:12:09.736 00:12:09.736 ' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:09.736 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:09.736 --rc genhtml_branch_coverage=1 00:12:09.736 --rc genhtml_function_coverage=1 00:12:09.736 --rc genhtml_legend=1 00:12:09.736 --rc geninfo_all_blocks=1 00:12:09.736 --rc geninfo_unexecuted_blocks=1 00:12:09.736 00:12:09.736 ' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:09.736 10:29:50 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@15 -- # shopt -s extglob 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
paths/export.sh@5 -- # export PATH 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@50 -- # : 0 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:09.736 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:09.736 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # remove_target_ns 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # xtrace_disable 00:12:09.996 10:29:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # pci_devs=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # net_devs=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # e810=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@136 -- # local -ga e810 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # x722=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@137 -- # local -ga x722 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # mlx=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@138 -- # local -ga mlx 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:16.563 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:16.563 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:16.563 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:16.563 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:16.563 Found net devices under 0000:86:00.0: cvl_0_0 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:16.563 Found net devices under 0000:86:00.1: cvl_0_1 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:16.563 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@262 -- # is_hw=yes 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@257 -- # create_target_ns 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@258 -- # 
setup_interfaces 1 phy 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@28 -- # local -g _dev 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # ips=() 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:16.563 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772161 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # 
tee /sys/class/net/cvl_0_0/ifalias 00:12:16.564 10.0.0.1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@11 -- # local val=167772162 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:16.564 10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ 
-n '' ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 
00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 
00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:16.564 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:16.564 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.297 ms 00:12:16.564 00:12:16.564 --- 10.0.0.1 ping statistics --- 00:12:16.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.564 rtt min/avg/max/mdev = 0.297/0.297/0.297/0.000 ms 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target0 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:16.564 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:16.564 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:16.564 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.143 ms 00:12:16.564 00:12:16.564 --- 10.0.0.2 ping statistics --- 00:12:16.564 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:16.565 rtt min/avg/max/mdev = 0.143/0.143/0.143/0.000 ms 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # return 0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:16.565 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 
-- # local dev=target0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:16.565 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@107 -- # local dev=target1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@109 -- # return 1 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@168 -- # dev= 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@169 -- # return 0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@726 -- # xtrace_disable 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@328 -- # nvmfpid=3157461 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # waitforlisten 3157461 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@835 -- # '[' -z 3157461 ']' 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:16.565 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.565 [2024-11-20 10:29:56.635473] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:12:16.565 [2024-11-20 10:29:56.635530] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:16.565 [2024-11-20 10:29:56.718234] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:16.565 [2024-11-20 10:29:56.759885] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:16.565 [2024-11-20 10:29:56.759924] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
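The launch-and-wait step traced above (nvmf_tgt started inside the nvmf_ns_spdk namespace, then `waitforlisten` polling until the process is listening on /var/tmp/spdk.sock, with `max_retries=100`) can be sketched as a generic bounded polling loop. This is a minimal illustration, not the literal helper from autotest_common.sh; `wait_for_path` and the sleep interval are assumptions, and the real helper typically also checks that the target pid is still alive between retries.

```shell
#!/bin/sh
# Hedged sketch of the waitforlisten pattern: poll until a path
# (e.g. the SPDK RPC socket /var/tmp/spdk.sock) exists, giving up
# after a bounded number of retries. Names/bounds are illustrative.
wait_for_path() {
    path=$1
    retries=${2:-100}      # the trace shows local max_retries=100
    i=0
    while [ ! -e "$path" ]; do
        i=$((i + 1))
        [ "$i" -ge "$retries" ] && return 1
        sleep 0.1          # the real helper waits longer per retry
    done
    return 0
}
```

Once the socket exists, the script proceeds to issue RPCs (`rpc_cmd`, i.e. scripts/rpc.py) against it, as the `nvmf_get_stats` calls below show.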
00:12:16.565 [2024-11-20 10:29:56.759932] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:16.565 [2024-11-20 10:29:56.759938] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:16.565 [2024-11-20 10:29:56.759942] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:16.565 [2024-11-20 10:29:56.761400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:16.565 [2024-11-20 10:29:56.761507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:16.565 [2024-11-20 10:29:56.761611] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:16.565 [2024-11-20 10:29:56.761612] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.565 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.565 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:12:16.565 "tick_rate": 2100000000, 00:12:16.566 "poll_groups": [ 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_000", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_001", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_002", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_003", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [] 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 }' 00:12:16.566 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:12:16.566 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:12:16.566 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:12:16.566 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:12:16.566 10:29:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 00:12:16.566 10:29:56 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 [2024-11-20 10:29:57.014645] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:12:16.566 "tick_rate": 2100000000, 00:12:16.566 "poll_groups": [ 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_000", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [ 00:12:16.566 { 00:12:16.566 "trtype": "TCP" 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_001", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 
"completed_nvme_io": 0, 00:12:16.566 "transports": [ 00:12:16.566 { 00:12:16.566 "trtype": "TCP" 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_002", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [ 00:12:16.566 { 00:12:16.566 "trtype": "TCP" 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 }, 00:12:16.566 { 00:12:16.566 "name": "nvmf_tgt_poll_group_003", 00:12:16.566 "admin_qpairs": 0, 00:12:16.566 "io_qpairs": 0, 00:12:16.566 "current_admin_qpairs": 0, 00:12:16.566 "current_io_qpairs": 0, 00:12:16.566 "pending_bdev_io": 0, 00:12:16.566 "completed_nvme_io": 0, 00:12:16.566 "transports": [ 00:12:16.566 { 00:12:16.566 "trtype": "TCP" 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 } 00:12:16.566 ] 00:12:16.566 }' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:16.566 
10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 Malloc1 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:12:16.566 10:29:57 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.566 [2024-11-20 10:29:57.199030] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t 
"$arg")" in 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:12:16.566 [2024-11-20 10:29:57.227631] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:16.566 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:16.566 could not add new controller: failed to write to nvme-fabrics device 00:12:16.566 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:16.567 10:29:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:17.942 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:12:17.942 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:17.942 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:17.942 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:17.942 10:29:58 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 
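The `waitforserial` helper traced above loops on `lsblk -l -o NAME,SERIAL | grep -c SPDKISFASTANDAWESOME` (bounded by `(( i++ <= 15 ))`) until the expected number of NVMe devices appears after `nvme connect`. A stand-alone sketch of that retry shape, with the counting command passed in as an argument — a test seam that is not part of the original helper:

```shell
#!/bin/sh
# Retry until "$@" (a command printing a device count) reports at
# least $want matches, mirroring waitforserial's lsblk|grep -c loop.
wait_for_count() {
    want=$1; shift
    i=0
    while [ "$i" -le 15 ]; do      # the trace shows (( i++ <= 15 ))
        n=$("$@")
        [ "$n" -ge "$want" ] && return 0
        i=$((i + 1))
        sleep 0.1                  # shortened; the real script sleeps 2s
    done
    return 1
}
```

In the run above the first `lsblk` probe already counts one matching device (`nvme_devices=1`), so the loop returns on its first iteration.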
00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:19.845 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@652 -- # local es=0 00:12:19.845 10:30:00 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # local arg=nvme 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # type -t nvme 00:12:19.845 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.846 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # type -P nvme 00:12:19.846 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:12:19.846 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # arg=/usr/sbin/nvme 00:12:19.846 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@646 -- # [[ -x /usr/sbin/nvme ]] 00:12:19.846 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:19.846 [2024-11-20 10:30:00.551484] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562' 00:12:20.105 Failed to write to /dev/nvme-fabrics: Input/output error 00:12:20.105 could not add new controller: failed to write to nvme-fabrics device 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@655 -- # es=1 00:12:20.105 
10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:20.105 10:30:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:21.040 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:12:21.040 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:21.040 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:21.040 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:21.040 10:30:01 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c 
SPDKISFASTANDAWESOME 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:23.571 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:12:23.571 10:30:03 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.571 [2024-11-20 10:30:03.874448] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.571 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:23.572 10:30:03 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:24.507 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:24.507 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:24.507 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:24.507 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:24.507 10:30:05 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:26.408 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:26.667 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:26.667 
10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 [2024-11-20 10:30:07.222941] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:26.667 10:30:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:28.043 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:28.043 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:28.043 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:28.043 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:28.043 10:30:08 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:29.956 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 10:30:10 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 [2024-11-20 10:30:10.633723] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.956 10:30:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:31.330 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:31.330 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:31.330 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:31.330 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:31.330 10:30:11 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:33.232 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.232 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 [2024-11-20 10:30:13.981542] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:33.490 10:30:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:33.490 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:33.490 10:30:14 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:34.427 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:34.427 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:34.427 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 
-- # local nvme_device_counter=1 nvme_devices=0 00:12:34.427 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:34.427 10:30:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # sleep 2 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:36.957 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@1235 -- # return 0 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 [2024-11-20 10:30:17.294155] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:36.957 10:30:17 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:12:37.892 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:12:37.892 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1202 -- # local i=0 00:12:37.892 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:12:37.892 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:12:37.892 10:30:18 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1209 -- # 
sleep 2 00:12:39.792 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:12:39.792 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:12:39.792 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1212 -- # return 0 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:12:40.051 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1223 -- # local i=0 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1235 -- # return 0 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 [2024-11-20 10:30:20.661611] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 
-- # for i in $(seq 1 $loops) 00:12:40.051 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 [2024-11-20 10:30:20.709713] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 
10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- 
# set +x 00:12:40.052 [2024-11-20 10:30:20.757846] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.052 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 [2024-11-20 10:30:20.806017] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.311 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 
00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t 
tcp -a 10.0.0.2 -s 4420 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 [2024-11-20 10:30:20.854177] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:12:40.312 "tick_rate": 2100000000, 00:12:40.312 "poll_groups": [ 00:12:40.312 { 00:12:40.312 "name": "nvmf_tgt_poll_group_000", 00:12:40.312 "admin_qpairs": 2, 00:12:40.312 "io_qpairs": 168, 00:12:40.312 "current_admin_qpairs": 0, 00:12:40.312 "current_io_qpairs": 0, 00:12:40.312 "pending_bdev_io": 0, 00:12:40.312 "completed_nvme_io": 267, 00:12:40.312 "transports": [ 00:12:40.312 { 00:12:40.312 "trtype": "TCP" 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 }, 00:12:40.312 { 00:12:40.312 "name": "nvmf_tgt_poll_group_001", 00:12:40.312 "admin_qpairs": 2, 00:12:40.312 "io_qpairs": 168, 00:12:40.312 "current_admin_qpairs": 0, 00:12:40.312 "current_io_qpairs": 0, 00:12:40.312 "pending_bdev_io": 0, 00:12:40.312 "completed_nvme_io": 267, 00:12:40.312 "transports": [ 00:12:40.312 { 00:12:40.312 "trtype": "TCP" 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 }, 00:12:40.312 { 00:12:40.312 "name": "nvmf_tgt_poll_group_002", 00:12:40.312 "admin_qpairs": 1, 00:12:40.312 "io_qpairs": 168, 00:12:40.312 "current_admin_qpairs": 0, 00:12:40.312 "current_io_qpairs": 0, 00:12:40.312 "pending_bdev_io": 0, 
00:12:40.312 "completed_nvme_io": 221, 00:12:40.312 "transports": [ 00:12:40.312 { 00:12:40.312 "trtype": "TCP" 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 }, 00:12:40.312 { 00:12:40.312 "name": "nvmf_tgt_poll_group_003", 00:12:40.312 "admin_qpairs": 2, 00:12:40.312 "io_qpairs": 168, 00:12:40.312 "current_admin_qpairs": 0, 00:12:40.312 "current_io_qpairs": 0, 00:12:40.312 "pending_bdev_io": 0, 00:12:40.312 "completed_nvme_io": 267, 00:12:40.312 "transports": [ 00:12:40.312 { 00:12:40.312 "trtype": "TCP" 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 } 00:12:40.312 ] 00:12:40.312 }' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@123 -- # nvmftestfini 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:40.312 10:30:20 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@99 -- # sync 00:12:40.312 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:40.312 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@102 -- # set +e 00:12:40.312 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:40.312 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:40.312 rmmod nvme_tcp 00:12:40.312 rmmod nvme_fabrics 00:12:40.572 rmmod nvme_keyring 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@106 -- # set -e 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@107 -- # return 0 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@336 -- # '[' -n 3157461 ']' 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@337 -- # killprocess 3157461 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # '[' -z 3157461 ']' 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # kill -0 3157461 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # uname 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3157461 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3157461' 00:12:40.572 killing process with pid 3157461 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@973 -- # kill 3157461 00:12:40.572 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@978 -- # wait 3157461 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # nvmf_fini 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@264 -- # local dev 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:40.831 10:30:21 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@130 -- # return 0 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:42.734 10:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # _dev=0 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@41 -- # dev_map=() 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/setup.sh@284 -- # iptr 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-save 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@542 -- # iptables-restore 00:12:42.734 00:12:42.734 real 
0m33.144s 00:12:42.734 user 1m39.506s 00:12:42.734 sys 0m6.674s 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:42.734 ************************************ 00:12:42.734 END TEST nvmf_rpc 00:12:42.734 ************************************ 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:42.734 ************************************ 00:12:42.734 START TEST nvmf_invalid 00:12:42.734 ************************************ 00:12:42.734 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:12:42.994 * Looking for test storage... 
00:12:42.994 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lcov --version 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # IFS=.-: 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@336 -- # read -ra ver1 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # IFS=.-: 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@337 -- # read -ra ver2 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@338 -- # local 'op=<' 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@340 -- # ver1_l=2 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@341 -- # ver2_l=1 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@344 -- # case "$op" in 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@345 -- # : 1 00:12:42.994 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- scripts/common.sh@364 -- # (( v = 0 )) 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # decimal 1 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=1 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 1 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@365 -- # ver1[v]=1 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # decimal 2 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@353 -- # local d=2 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@355 -- # echo 2 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@366 -- # ver2[v]=2 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@368 -- # return 0 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:42.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.995 --rc genhtml_branch_coverage=1 00:12:42.995 --rc 
genhtml_function_coverage=1 00:12:42.995 --rc genhtml_legend=1 00:12:42.995 --rc geninfo_all_blocks=1 00:12:42.995 --rc geninfo_unexecuted_blocks=1 00:12:42.995 00:12:42.995 ' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:42.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.995 --rc genhtml_branch_coverage=1 00:12:42.995 --rc genhtml_function_coverage=1 00:12:42.995 --rc genhtml_legend=1 00:12:42.995 --rc geninfo_all_blocks=1 00:12:42.995 --rc geninfo_unexecuted_blocks=1 00:12:42.995 00:12:42.995 ' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:42.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.995 --rc genhtml_branch_coverage=1 00:12:42.995 --rc genhtml_function_coverage=1 00:12:42.995 --rc genhtml_legend=1 00:12:42.995 --rc geninfo_all_blocks=1 00:12:42.995 --rc geninfo_unexecuted_blocks=1 00:12:42.995 00:12:42.995 ' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:42.995 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:42.995 --rc genhtml_branch_coverage=1 00:12:42.995 --rc genhtml_function_coverage=1 00:12:42.995 --rc genhtml_legend=1 00:12:42.995 --rc geninfo_all_blocks=1 00:12:42.995 --rc geninfo_unexecuted_blocks=1 00:12:42.995 00:12:42.995 ' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@15 -- # shopt -s extglob 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:42.995 10:30:23 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@50 -- # : 0 
00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:42.995 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 
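The `[: : integer expression expected` message captured above is a real (non-fatal) bash error: `common.sh` line 31 runs `'[' '' -eq 1 ']'`, i.e. an unset/empty variable is compared numerically with `-eq`. A minimal sketch of the failure mode and the usual guard, assuming a hypothetical variable name (the actual variable tested in `common.sh` is not visible in this log):

```shell
#!/bin/sh
# Reproduces the class of error seen in the log: [ "" -eq 1 ] is not a valid
# integer comparison. SOME_FLAG is a hypothetical stand-in for whatever
# variable common.sh line 31 tests; the real name is not shown in the log.
SOME_FLAG=""

# Guarded form: default the empty value to 0 before the numeric test,
# so the comparison is always between integers.
if [ "${SOME_FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag not set"
fi
```

With `SOME_FLAG` empty the guarded test compares `0 -eq 1` and prints `flag not set`; the unguarded form `[ "$SOME_FLAG" -eq 1 ]` would instead emit the "integer expression expected" diagnostic, exactly as the log shows, and the script continues because the test simply returns non-zero.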
00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # remove_target_ns 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:42.995 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:42.996 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@125 -- # xtrace_disable 00:12:42.996 10:30:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # pci_devs=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@131 -- # local -a pci_devs 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@133 -- # pci_drivers=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@133 -- # local -A pci_drivers 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # net_devs=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@135 -- # local -ga net_devs 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # e810=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@136 -- # local -ga e810 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # x722=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@137 -- # local -ga x722 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # mlx=() 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@138 -- # local -ga mlx 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@156 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:49.564 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.564 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:49.565 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:49.565 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.565 
10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:49.565 Found net devices under 0000:86:00.0: cvl_0_0 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:49.565 Found net devices under 0000:86:00.1: cvl_0_1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@262 -- # is_hw=yes 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:12:49.565 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@257 -- # create_target_ns 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@28 
-- # local -g _dev 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # ips=() 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:12:49.565 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772161 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:12:49.565 10.0.0.1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@11 -- # local val=167772162 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:12:49.565 10.0.0.2 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:12:49.565 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@38 -- # ping_ips 1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:12:49.566 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:49.566 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.378 ms 00:12:49.566 00:12:49.566 --- 10.0.0.1 ping statistics --- 00:12:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.566 rtt min/avg/max/mdev = 0.378/0.378/0.378/0.000 ms 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:49.566 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:12:49.566 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:49.566 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:12:49.566 00:12:49.566 --- 10.0.0.2 ping statistics --- 00:12:49.566 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:49.566 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # return 0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
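The trace above amounts to one coherent procedure: create the `nvmf_ns_spdk` namespace, move the target-side NIC into it, assign `10.0.0.1/24` (initiator) and `10.0.0.2/24` (target), bring the links up, punch an iptables hole for NVMe/TCP port 4420, and ping in both directions to verify. A condensed sketch of that sequence, with one stated assumption: the log uses physical `ice` devices (`cvl_0_0`/`cvl_0_1`, the `phy` net type), whereas this sketch substitutes a veth pair so it can run on a machine without the hardware. Requires root.

```shell
#!/bin/bash
# Condensed sketch of the setup.sh flow captured in the log. A veth pair
# stands in for the physical cvl_0_0/cvl_0_1 NICs -- that substitution is
# an assumption; everything else mirrors commands visible in the trace.
set -e

ip netns add nvmf_ns_spdk
ip link add cvl_0_0 type veth peer name cvl_0_1   # assumption: veth, not phy
ip link set cvl_0_1 netns nvmf_ns_spdk            # target side into the ns

ip addr add 10.0.0.1/24 dev cvl_0_0               # initiator address
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1

ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set lo up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up

# Allow NVMe/TCP traffic to the default discovery/data port, as ipts does:
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Verify reachability in both directions, as ping_ips does:
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

Isolating the target in its own namespace is what lets the test run initiator and target over real TCP on a single host: the nvmf target app is later launched through `NVMF_TARGET_NS_CMD` (`ip netns exec nvmf_ns_spdk ...`), so it listens on `10.0.0.2` while the initiator connects from the host side.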
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.566 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target0 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:12:49.566 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:12:49.567 10:30:29 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@107 -- # local dev=target1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@109 -- # return 1 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@168 -- # dev= 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@169 -- # return 0 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
common/autotest_common.sh@726 -- # xtrace_disable 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@328 -- # nvmfpid=3165717 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # waitforlisten 3165717 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@835 -- # '[' -z 3165717 ']' 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:49.567 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:49.567 10:30:29 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.567 [2024-11-20 10:30:29.836056] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:12:49.567 [2024-11-20 10:30:29.836098] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:49.567 [2024-11-20 10:30:29.913731] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:49.567 [2024-11-20 10:30:29.955466] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:49.567 [2024-11-20 10:30:29.955503] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:49.567 [2024-11-20 10:30:29.955510] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:49.567 [2024-11-20 10:30:29.955516] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:49.567 [2024-11-20 10:30:29.955521] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:12:49.567 [2024-11-20 10:30:29.957135] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:12:49.567 [2024-11-20 10:30:29.957263] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:12:49.567 [2024-11-20 10:30:29.957302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:12:49.567 [2024-11-20 10:30:29.957303] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@868 -- # return 0 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@732 -- # xtrace_disable 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:49.567 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode6665 00:12:49.567 [2024-11-20 10:30:30.275586] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:12:49.855 { 00:12:49.855 "nqn": "nqn.2016-06.io.spdk:cnode6665", 00:12:49.855 "tgt_name": "foobar", 00:12:49.855 "method": "nvmf_create_subsystem", 00:12:49.855 "req_id": 1 00:12:49.855 } 00:12:49.855 Got JSON-RPC error 
response 00:12:49.855 response: 00:12:49.855 { 00:12:49.855 "code": -32603, 00:12:49.855 "message": "Unable to find target foobar" 00:12:49.855 }' 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:12:49.855 { 00:12:49.855 "nqn": "nqn.2016-06.io.spdk:cnode6665", 00:12:49.855 "tgt_name": "foobar", 00:12:49.855 "method": "nvmf_create_subsystem", 00:12:49.855 "req_id": 1 00:12:49.855 } 00:12:49.855 Got JSON-RPC error response 00:12:49.855 response: 00:12:49.855 { 00:12:49.855 "code": -32603, 00:12:49.855 "message": "Unable to find target foobar" 00:12:49.855 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode29887 00:12:49.855 [2024-11-20 10:30:30.492340] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29887: invalid serial number 'SPDKISFASTANDAWESOME' 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:12:49.855 { 00:12:49.855 "nqn": "nqn.2016-06.io.spdk:cnode29887", 00:12:49.855 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.855 "method": "nvmf_create_subsystem", 00:12:49.855 "req_id": 1 00:12:49.855 } 00:12:49.855 Got JSON-RPC error response 00:12:49.855 response: 00:12:49.855 { 00:12:49.855 "code": -32602, 00:12:49.855 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.855 }' 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:12:49.855 { 00:12:49.855 "nqn": "nqn.2016-06.io.spdk:cnode29887", 00:12:49.855 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:12:49.855 "method": "nvmf_create_subsystem", 00:12:49.855 
"req_id": 1 00:12:49.855 } 00:12:49.855 Got JSON-RPC error response 00:12:49.855 response: 00:12:49.855 { 00:12:49.855 "code": -32602, 00:12:49.855 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:12:49.855 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:12:49.855 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode15783 00:12:50.114 [2024-11-20 10:30:30.693020] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15783: invalid model number 'SPDK_Controller' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:12:50.115 { 00:12:50.115 "nqn": "nqn.2016-06.io.spdk:cnode15783", 00:12:50.115 "model_number": "SPDK_Controller\u001f", 00:12:50.115 "method": "nvmf_create_subsystem", 00:12:50.115 "req_id": 1 00:12:50.115 } 00:12:50.115 Got JSON-RPC error response 00:12:50.115 response: 00:12:50.115 { 00:12:50.115 "code": -32602, 00:12:50.115 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.115 }' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:12:50.115 { 00:12:50.115 "nqn": "nqn.2016-06.io.spdk:cnode15783", 00:12:50.115 "model_number": "SPDK_Controller\u001f", 00:12:50.115 "method": "nvmf_create_subsystem", 00:12:50.115 "req_id": 1 00:12:50.115 } 00:12:50.115 Got JSON-RPC error response 00:12:50.115 response: 00:12:50.115 { 00:12:50.115 "code": -32602, 00:12:50.115 "message": "Invalid MN SPDK_Controller\u001f" 00:12:50.115 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=21 ll 
00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 123 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7b' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='{' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 38 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x26' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='&' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:12:50.115 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:12:50.115 10:30:30 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 46 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2e' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=. 
00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 52 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x34' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=4 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 72 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x48' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=H 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 
00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 97 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x61' 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=a 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.115 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 86 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x56' 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=V 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 56 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x38' 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=8 00:12:50.374 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:50.375 
10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ { == \- ]] 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '{4*&Aq/IKsR.4HuFaV8g;' 00:12:50.375 10:30:30 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s '{4*&Aq/IKsR.4HuFaV8g;' nqn.2016-06.io.spdk:cnode5306 00:12:50.375 [2024-11-20 10:30:31.034215] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode5306: invalid serial number '{4*&Aq/IKsR.4HuFaV8g;' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # out='request: 00:12:50.375 { 00:12:50.375 "nqn": "nqn.2016-06.io.spdk:cnode5306", 00:12:50.375 "serial_number": "{4*&Aq/IKsR.4HuFaV8g;", 00:12:50.375 "method": "nvmf_create_subsystem", 00:12:50.375 "req_id": 1 00:12:50.375 } 00:12:50.375 Got JSON-RPC error response 00:12:50.375 response: 00:12:50.375 { 00:12:50.375 "code": -32602, 
00:12:50.375 "message": "Invalid SN {4*&Aq/IKsR.4HuFaV8g;" 00:12:50.375 }' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:12:50.375 { 00:12:50.375 "nqn": "nqn.2016-06.io.spdk:cnode5306", 00:12:50.375 "serial_number": "{4*&Aq/IKsR.4HuFaV8g;", 00:12:50.375 "method": "nvmf_create_subsystem", 00:12:50.375 "req_id": 1 00:12:50.375 } 00:12:50.375 Got JSON-RPC error response 00:12:50.375 response: 00:12:50.375 { 00:12:50.375 "code": -32602, 00:12:50.375 "message": "Invalid SN {4*&Aq/IKsR.4HuFaV8g;" 00:12:50.375 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:12:50.375 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x77' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 106 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6a' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=j 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 79 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4f' 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=O 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.375 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.634 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 125 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7d' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='}' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 82 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x52' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=R 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:12:50.634 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 37 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x25' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=% 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 75 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4b' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=K 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 62 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3e' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='>' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 45 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2d' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=- 00:12:50.635 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:12:50.635 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 63 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3f' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='?' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 84 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x54' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=T 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 127 00:12:50.635 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7f' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=$'\177' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 78 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4e' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=N 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 49 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x31' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=1 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 122 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 126 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7e' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='~' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 59 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3b' 00:12:50.635 
10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=';' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:12:50.635 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 87 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x57' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=W 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:12:50.636 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 67 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x43' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=C 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 101 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x65' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=e 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:12:50.636 10:30:31 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ w == \- ]] 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'wjOA}R\%K>-Lc~@?TBNC1z~!XCZ;teWB C5Feug' 00:12:50.636 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d 'wjOA}R\%K>-Lc~@?TBNC1z~!XCZ;teWB C5Feug' nqn.2016-06.io.spdk:cnode556 00:12:50.895 [2024-11-20 10:30:31.511802] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode556: invalid model number 'wjOA}R\%K>-Lc~@?TBNC1z~!XCZ;teWB C5Feug' 00:12:50.895 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # out='request: 00:12:50.895 { 00:12:50.895 "nqn": "nqn.2016-06.io.spdk:cnode556", 00:12:50.895 "model_number": "wj\u007fOA}R\\%K>-Lc~@?TB\u007fNC1z~!XCZ;teWB C5Feug", 00:12:50.895 "method": "nvmf_create_subsystem", 00:12:50.895 "req_id": 1 00:12:50.895 } 00:12:50.895 Got JSON-RPC error response 00:12:50.895 response: 00:12:50.895 { 00:12:50.895 "code": -32602, 00:12:50.895 "message": "Invalid MN wj\u007fOA}R\\%K>-Lc~@?TB\u007fNC1z~!XCZ;teWB C5Feug" 00:12:50.895 }' 00:12:50.895 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@59 -- # [[ request: 00:12:50.895 { 00:12:50.895 "nqn": "nqn.2016-06.io.spdk:cnode556", 00:12:50.895 "model_number": "wj\u007fOA}R\\%K>-Lc~@?TB\u007fNC1z~!XCZ;teWB C5Feug", 00:12:50.895 "method": "nvmf_create_subsystem", 00:12:50.895 "req_id": 1 00:12:50.895 } 00:12:50.895 Got JSON-RPC error response 00:12:50.895 response: 00:12:50.895 { 00:12:50.895 "code": -32602, 00:12:50.895 "message": "Invalid MN wj\u007fOA}R\\%K>-Lc~@?TB\u007fNC1z~!XCZ;teWB C5Feug" 00:12:50.895 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:12:50.895 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport --trtype tcp 00:12:51.222 [2024-11-20 10:30:31.720523] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:51.222 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode -s SPDK001 -a 00:12:51.481 10:30:31 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode -t tcp -a 10.0.0.1 -s 4421 00:12:51.481 [2024-11-20 10:30:32.121850] nvmf_rpc.c: 783:nvmf_rpc_listen_paused: *ERROR*: Unable to remove listener, rc -2 00:12:51.481 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@64 -- # out='request: 00:12:51.481 { 00:12:51.481 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.481 "listen_address": { 00:12:51.481 "trtype": "tcp", 00:12:51.481 "traddr": "10.0.0.1", 00:12:51.481 "trsvcid": "4421" 00:12:51.481 }, 00:12:51.481 "method": "nvmf_subsystem_remove_listener", 00:12:51.481 "req_id": 1 00:12:51.481 } 00:12:51.481 Got JSON-RPC error response 00:12:51.481 response: 00:12:51.481 { 00:12:51.481 "code": -32602, 00:12:51.481 "message": "Invalid parameters" 00:12:51.481 }' 00:12:51.481 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@65 -- # [[ request: 00:12:51.481 { 00:12:51.481 "nqn": "nqn.2016-06.io.spdk:cnode", 00:12:51.481 "listen_address": { 00:12:51.481 "trtype": "tcp", 00:12:51.481 "traddr": "10.0.0.1", 00:12:51.481 "trsvcid": "4421" 00:12:51.481 }, 00:12:51.481 "method": "nvmf_subsystem_remove_listener", 00:12:51.481 "req_id": 1 00:12:51.481 } 00:12:51.481 Got JSON-RPC error response 00:12:51.481 response: 00:12:51.481 { 00:12:51.481 "code": -32602, 00:12:51.481 "message": "Invalid parameters" 00:12:51.481 } != *\U\n\a\b\l\e\ \t\o\ 
\s\t\o\p\ \l\i\s\t\e\n\e\r\.* ]] 00:12:51.481 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode29077 -i 0 00:12:51.740 [2024-11-20 10:30:32.318473] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode29077: invalid cntlid range [0-65519] 00:12:51.740 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@68 -- # out='request: 00:12:51.740 { 00:12:51.740 "nqn": "nqn.2016-06.io.spdk:cnode29077", 00:12:51.740 "min_cntlid": 0, 00:12:51.740 "method": "nvmf_create_subsystem", 00:12:51.740 "req_id": 1 00:12:51.740 } 00:12:51.740 Got JSON-RPC error response 00:12:51.740 response: 00:12:51.740 { 00:12:51.740 "code": -32602, 00:12:51.740 "message": "Invalid cntlid range [0-65519]" 00:12:51.740 }' 00:12:51.740 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@69 -- # [[ request: 00:12:51.740 { 00:12:51.740 "nqn": "nqn.2016-06.io.spdk:cnode29077", 00:12:51.740 "min_cntlid": 0, 00:12:51.740 "method": "nvmf_create_subsystem", 00:12:51.740 "req_id": 1 00:12:51.740 } 00:12:51.740 Got JSON-RPC error response 00:12:51.740 response: 00:12:51.740 { 00:12:51.740 "code": -32602, 00:12:51.740 "message": "Invalid cntlid range [0-65519]" 00:12:51.740 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.740 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode24487 -i 65520 00:12:51.999 [2024-11-20 10:30:32.519151] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode24487: invalid cntlid range [65520-65519] 00:12:51.999 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@70 -- # out='request: 00:12:51.999 { 00:12:51.999 "nqn": "nqn.2016-06.io.spdk:cnode24487", 00:12:51.999 "min_cntlid": 
65520, 00:12:51.999 "method": "nvmf_create_subsystem", 00:12:51.999 "req_id": 1 00:12:51.999 } 00:12:51.999 Got JSON-RPC error response 00:12:51.999 response: 00:12:51.999 { 00:12:51.999 "code": -32602, 00:12:51.999 "message": "Invalid cntlid range [65520-65519]" 00:12:51.999 }' 00:12:51.999 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@71 -- # [[ request: 00:12:51.999 { 00:12:51.999 "nqn": "nqn.2016-06.io.spdk:cnode24487", 00:12:51.999 "min_cntlid": 65520, 00:12:51.999 "method": "nvmf_create_subsystem", 00:12:51.999 "req_id": 1 00:12:51.999 } 00:12:51.999 Got JSON-RPC error response 00:12:51.999 response: 00:12:51.999 { 00:12:51.999 "code": -32602, 00:12:51.999 "message": "Invalid cntlid range [65520-65519]" 00:12:51.999 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:51.999 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode15770 -I 0 00:12:51.999 [2024-11-20 10:30:32.707777] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode15770: invalid cntlid range [1-0] 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@72 -- # out='request: 00:12:52.258 { 00:12:52.258 "nqn": "nqn.2016-06.io.spdk:cnode15770", 00:12:52.258 "max_cntlid": 0, 00:12:52.258 "method": "nvmf_create_subsystem", 00:12:52.258 "req_id": 1 00:12:52.258 } 00:12:52.258 Got JSON-RPC error response 00:12:52.258 response: 00:12:52.258 { 00:12:52.258 "code": -32602, 00:12:52.258 "message": "Invalid cntlid range [1-0]" 00:12:52.258 }' 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@73 -- # [[ request: 00:12:52.258 { 00:12:52.258 "nqn": "nqn.2016-06.io.spdk:cnode15770", 00:12:52.258 "max_cntlid": 0, 00:12:52.258 "method": "nvmf_create_subsystem", 00:12:52.258 "req_id": 1 00:12:52.258 } 00:12:52.258 Got JSON-RPC error response 00:12:52.258 
response: 00:12:52.258 { 00:12:52.258 "code": -32602, 00:12:52.258 "message": "Invalid cntlid range [1-0]" 00:12:52.258 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode25827 -I 65520 00:12:52.258 [2024-11-20 10:30:32.904440] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode25827: invalid cntlid range [1-65520] 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@74 -- # out='request: 00:12:52.258 { 00:12:52.258 "nqn": "nqn.2016-06.io.spdk:cnode25827", 00:12:52.258 "max_cntlid": 65520, 00:12:52.258 "method": "nvmf_create_subsystem", 00:12:52.258 "req_id": 1 00:12:52.258 } 00:12:52.258 Got JSON-RPC error response 00:12:52.258 response: 00:12:52.258 { 00:12:52.258 "code": -32602, 00:12:52.258 "message": "Invalid cntlid range [1-65520]" 00:12:52.258 }' 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@75 -- # [[ request: 00:12:52.258 { 00:12:52.258 "nqn": "nqn.2016-06.io.spdk:cnode25827", 00:12:52.258 "max_cntlid": 65520, 00:12:52.258 "method": "nvmf_create_subsystem", 00:12:52.258 "req_id": 1 00:12:52.258 } 00:12:52.258 Got JSON-RPC error response 00:12:52.258 response: 00:12:52.258 { 00:12:52.258 "code": -32602, 00:12:52.258 "message": "Invalid cntlid range [1-65520]" 00:12:52.258 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.258 10:30:32 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode27683 -i 6 -I 5 00:12:52.517 [2024-11-20 10:30:33.121224] nvmf_rpc.c: 434:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode27683: invalid cntlid range [6-5] 00:12:52.517 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid 
-- target/invalid.sh@78 -- # out='request: 00:12:52.517 { 00:12:52.517 "nqn": "nqn.2016-06.io.spdk:cnode27683", 00:12:52.517 "min_cntlid": 6, 00:12:52.517 "max_cntlid": 5, 00:12:52.517 "method": "nvmf_create_subsystem", 00:12:52.517 "req_id": 1 00:12:52.517 } 00:12:52.517 Got JSON-RPC error response 00:12:52.517 response: 00:12:52.517 { 00:12:52.517 "code": -32602, 00:12:52.517 "message": "Invalid cntlid range [6-5]" 00:12:52.517 }' 00:12:52.517 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@79 -- # [[ request: 00:12:52.517 { 00:12:52.517 "nqn": "nqn.2016-06.io.spdk:cnode27683", 00:12:52.517 "min_cntlid": 6, 00:12:52.517 "max_cntlid": 5, 00:12:52.517 "method": "nvmf_create_subsystem", 00:12:52.517 "req_id": 1 00:12:52.517 } 00:12:52.517 Got JSON-RPC error response 00:12:52.517 response: 00:12:52.517 { 00:12:52.517 "code": -32602, 00:12:52.517 "message": "Invalid cntlid range [6-5]" 00:12:52.517 } == *\I\n\v\a\l\i\d\ \c\n\t\l\i\d\ \r\a\n\g\e* ]] 00:12:52.517 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target --name foobar 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@82 -- # out='request: 00:12:52.776 { 00:12:52.776 "name": "foobar", 00:12:52.776 "method": "nvmf_delete_target", 00:12:52.776 "req_id": 1 00:12:52.776 } 00:12:52.776 Got JSON-RPC error response 00:12:52.776 response: 00:12:52.776 { 00:12:52.776 "code": -32602, 00:12:52.776 "message": "The specified target doesn'\''t exist, cannot delete it." 
00:12:52.776 }' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@83 -- # [[ request: 00:12:52.776 { 00:12:52.776 "name": "foobar", 00:12:52.776 "method": "nvmf_delete_target", 00:12:52.776 "req_id": 1 00:12:52.776 } 00:12:52.776 Got JSON-RPC error response 00:12:52.776 response: 00:12:52.776 { 00:12:52.776 "code": -32602, 00:12:52.776 "message": "The specified target doesn't exist, cannot delete it." 00:12:52.776 } == *\T\h\e\ \s\p\e\c\i\f\i\e\d\ \t\a\r\g\e\t\ \d\o\e\s\n\'\t\ \e\x\i\s\t\,\ \c\a\n\n\o\t\ \d\e\l\e\t\e\ \i\t\.* ]] 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@86 -- # nvmftestfini 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # nvmfcleanup 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@99 -- # sync 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@102 -- # set +e 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@103 -- # for i in {1..20} 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:12:52.776 rmmod nvme_tcp 00:12:52.776 rmmod nvme_fabrics 00:12:52.776 rmmod nvme_keyring 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@106 -- # set -e 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@107 -- # return 0 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@336 -- # '[' -n 3165717 ']' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@337 -- # killprocess 3165717 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@954 -- # '[' -z 3165717 ']' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@958 -- # kill -0 3165717 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # uname 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3165717 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3165717' 00:12:52.776 killing process with pid 3165717 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@973 -- # kill 3165717 00:12:52.776 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@978 -- # wait 3165717 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # nvmf_fini 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@264 -- # local dev 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@267 -- # remove_target_ns 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:53.035 10:30:33 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:53.035 10:30:33 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@130 -- # return 0 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:12:54.940 
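The `flush_ip` calls above show setup.sh's eval pattern: each command is run either on the host or prefixed with `ip netns exec <ns>`, chosen by the name of an optional array variable such as `NVMF_TARGET_NS_CMD`. A sketch of that indirection (the nameref mechanics are an assumption about setup.sh's internals; `flush_ip` needs root and is shown for illustration only):

```shell
# Run a command, optionally prefixed by the contents of a named array
# variable (e.g. NVMF_TARGET_NS_CMD=(ip netns exec nvmf_ns_spdk)).
# Hypothetical reconstruction of the eval pattern in the trace.
run_in_ns() {
    local in_ns=$1; shift
    local prefix=
    if [ -n "$in_ns" ]; then
        local -n ns_cmd=$in_ns      # bash 4.3+ nameref
        prefix=${ns_cmd[*]}
    fi
    eval "$prefix $*"
}

# Drop all addresses on a device, optionally inside the target
# namespace -- the operation "ip addr flush dev cvl_0_0" performs above.
flush_ip() {
    run_in_ns "${2:-}" "ip addr flush dev $1"
}
```

With an empty first argument `run_in_ns` executes on the host; pointing it at a populated prefix array runs the same command inside the namespace.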
10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # _dev=0 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@41 -- # dev_map=() 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/setup.sh@284 -- # iptr 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-save 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@542 -- # iptables-restore 00:12:54.940 00:12:54.940 real 0m12.165s 00:12:54.940 user 0m18.682s 00:12:54.940 sys 0m5.424s 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:12:54.940 ************************************ 00:12:54.940 END TEST nvmf_invalid 00:12:54.940 ************************************ 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:54.940 10:30:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:55.199 ************************************ 00:12:55.199 START TEST nvmf_connect_stress 
00:12:55.199 ************************************ 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:12:55.199 * Looking for test storage... 00:12:55.199 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # IFS=.-: 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@336 -- # read -ra ver1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # IFS=.-: 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@337 -- # read -ra ver2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@338 -- # local 'op=<' 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@340 -- # ver1_l=2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
scripts/common.sh@341 -- # ver2_l=1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@344 -- # case "$op" in 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@345 -- # : 1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # decimal 1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # decimal 2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@353 -- # local d=2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@355 -- # echo 2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:12:55.199 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:12:55.199 
10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@368 -- # return 0 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:12:55.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.200 --rc genhtml_branch_coverage=1 00:12:55.200 --rc genhtml_function_coverage=1 00:12:55.200 --rc genhtml_legend=1 00:12:55.200 --rc geninfo_all_blocks=1 00:12:55.200 --rc geninfo_unexecuted_blocks=1 00:12:55.200 00:12:55.200 ' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:12:55.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.200 --rc genhtml_branch_coverage=1 00:12:55.200 --rc genhtml_function_coverage=1 00:12:55.200 --rc genhtml_legend=1 00:12:55.200 --rc geninfo_all_blocks=1 00:12:55.200 --rc geninfo_unexecuted_blocks=1 00:12:55.200 00:12:55.200 ' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:12:55.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.200 --rc genhtml_branch_coverage=1 00:12:55.200 --rc genhtml_function_coverage=1 00:12:55.200 --rc genhtml_legend=1 00:12:55.200 --rc geninfo_all_blocks=1 00:12:55.200 --rc geninfo_unexecuted_blocks=1 00:12:55.200 00:12:55.200 ' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:12:55.200 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:12:55.200 --rc genhtml_branch_coverage=1 00:12:55.200 --rc genhtml_function_coverage=1 00:12:55.200 --rc genhtml_legend=1 00:12:55.200 --rc geninfo_all_blocks=1 00:12:55.200 --rc geninfo_unexecuted_blocks=1 00:12:55.200 
00:12:55.200 ' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:12:55.200 10:30:35 
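The `lt 1.15 2` walk through `cmp_versions` earlier in the trace splits both version strings into fields and compares them numerically, field by field. A reduced sketch of that logic (the real `scripts/common.sh` also splits on `-` and `:` and supports other operators; this simplification is an assumption):

```shell
# Simplified version-compare: return 0 (true) iff $1 < $2, comparing
# dot-separated fields numerically, with missing fields treated as 0.
lt() {
    local IFS=.
    local -a v1=($1) v2=($2)
    local i x y
    for ((i = 0; i < ${#v1[@]} || i < ${#v2[@]}; i++)); do
        x=${v1[i]:-0} y=${v2[i]:-0}
        if ((x < y)); then return 0; fi
        if ((x > y)); then return 1; fi
    done
    return 1    # equal is not less-than
}
```

`lt 1.15 2` succeeds, matching the `lcov --version` gate in the trace; note `lt 1.2 1.10` also succeeds because fields compare numerically, not lexically.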
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@50 -- # : 0 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:12:55.200 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:12:55.200 10:30:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
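The `[: : integer expression expected` message above is the classic symptom of handing an empty string to a numeric `test` operator, as the traced `'[' '' -eq 1 ']'` does. A minimal reproduction and the usual guard of defaulting the variable before the comparison (the variable name here is made up for illustration):

```shell
# Reproduce: -eq requires both operands to be integers, so an empty
# variable makes test(1) fail exactly as in the log above.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then
    echo "enabled"
fi                          # condition errors out; nothing is printed

# Guard: default the (hypothetical) variable to 0 before comparing.
if [ "${flag:-0}" -eq 1 ]; then
    echo "enabled"
else
    echo "disabled"
fi
```

The guarded form prints `disabled` and never trips the integer-expression error.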
common/autotest_common.sh@10 -- # set +x 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # net_devs=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # e810=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@136 -- # local -ga e810 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # x722=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@137 -- # local -ga x722 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # mlx=() 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:01.767 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:01.767 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:01.768 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:01.768 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:01.768 Found net devices under 0000:86:00.0: cvl_0_0 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:01.768 Found net devices under 0000:86:00.1: cvl_0_1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@145 -- # ip netns add 
nvmf_ns_spdk 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:01.768 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # ips=() 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns 
nvmf_ns_spdk 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:01.768 10.0.0.1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:01.768 10.0.0.2 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:01.768 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:01.769 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:01.769 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:13:01.769 00:13:01.769 --- 10.0.0.1 ping statistics --- 00:13:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.769 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:01.769 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:01.769 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:13:01.769 00:13:01.769 --- 10.0.0.2 ping statistics --- 00:13:01.769 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.769 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # return 0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
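Stripped of the xtrace prefixes, the interface setup traced above reduces to the short command sequence below. This is a sketch reconstructed from the log only: the device names `cvl_0_0`/`cvl_0_1`, the namespace `nvmf_ns_spdk`, and the `10.0.0.0/24` pair all come from the trace, and the commands require root plus the physical NICs present on this test node, so it is illustrative rather than directly runnable elsewhere.

```shell
# Move the target-side NIC into the dedicated network namespace
ip link set cvl_0_1 netns nvmf_ns_spdk

# Assign the IP pair: initiator stays in the default namespace,
# target lives inside nvmf_ns_spdk; the ifalias mirrors the address
# so later helpers can read it back from sysfs
ip addr add 10.0.0.1/24 dev cvl_0_0
echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1
echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias

# Bring both ends up
ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up

# Open the NVMe/TCP port on the initiator-side interface
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT

# Verify connectivity in both directions across the namespace boundary
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
ping -c 1 10.0.0.2
```

Every command above appears verbatim in the trace; only the xtrace/eval scaffolding has been removed.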
nvmf/setup.sh@107 -- # local dev=initiator0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:01.769 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:01.770 10:30:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:13:01.770 
10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@109 -- # return 1 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@168 -- # dev= 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@169 -- # return 0 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:01.770 10:30:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@328 -- # nvmfpid=3170123 00:13:01.770 10:30:42 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # waitforlisten 3170123 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@835 -- # '[' -z 3170123 ']' 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.770 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:01.770 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:01.770 [2024-11-20 10:30:42.093333] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:01.770 [2024-11-20 10:30:42.093377] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.770 [2024-11-20 10:30:42.169535] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:01.770 [2024-11-20 10:30:42.210260] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:13:01.770 [2024-11-20 10:30:42.210295] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.770 [2024-11-20 10:30:42.210301] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.770 [2024-11-20 10:30:42.210307] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.770 [2024-11-20 10:30:42.210312] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:01.770 [2024-11-20 10:30:42.211724] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.770 [2024-11-20 10:30:42.211810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.770 [2024-11-20 10:30:42.211810] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@868 -- # return 0 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@10 -- # set +x 00:13:02.337 [2024-11-20 10:30:42.984611] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.337 10:30:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.337 [2024-11-20 10:30:43.004807] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.337 NULL1 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
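Once the target application is up inside the namespace, the connect_stress prologue issues a short RPC sequence against it. The sketch below collects the launch line and the three `rpc_cmd` calls exactly as they appear in the trace; `rpc_cmd` is the test framework's wrapper around SPDK's JSON-RPC client, and the workspace path is specific to this Jenkins node.

```shell
# Start the NVMe-oF target inside the namespace (core mask 0xE, tracing enabled)
ip netns exec nvmf_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt \
    -i 0 -e 0xFFFF -m 0xE &

# Create the TCP transport (-o: C2H success optimization, -u 8192: in-capsule data size)
rpc_cmd nvmf_create_transport -t tcp -o -u 8192

# Subsystem with a 10-namespace cap, then a TCP listener on the target-side address
rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

# Null backing bdev (1000 MiB, 512-byte blocks) for the stress test's namespaces
rpc_cmd bdev_null_create NULL1 1000 512
```

The listener address `10.0.0.2` is the target-side IP assigned during the interface setup earlier in the trace, which is why the `connect_stress` client can reach it from the default namespace.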
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.337 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=3170334 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # 
for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.338 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.596 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:02.855 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:02.855 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334 00:13:02.855 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:02.855 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:02.855 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334 00:13:03.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.113 10:30:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.371 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:03.371 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334 00:13:03.371 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:03.371 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:03.371 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:03.938 10:30:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- common/autotest_common.sh@591 --
# [[ 0 == 0 ]]
00:13:12.505 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334
00:13:12.505 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd
00:13:12.505 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@563 -- # xtrace_disable
00:13:12.505 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:12.763 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 3170334
00:13:13.022 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (3170334) - No such process
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 3170334
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # nvmfcleanup
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@99 -- # sync
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@102 -- # set +e
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@103 -- # for i in {1..20}
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
00:13:13.022 rmmod nvme_tcp
00:13:13.022 rmmod nvme_fabrics
00:13:13.022 rmmod nvme_keyring
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@106 -- # set -e
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@107 -- # return 0
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@336 -- # '[' -n 3170123 ']'
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@337 -- # killprocess 3170123
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # '[' -z 3170123 ']'
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # kill -0 3170123
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # uname
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3170123
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3170123'
00:13:13.022 killing process with pid 3170123
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@973 -- # kill 3170123
00:13:13.022 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@978 -- # wait 3170123
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # nvmf_fini
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@264 -- # local dev
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@267 -- # remove_target_ns
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:13:13.281 10:30:53 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_target_ns
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@268 -- # delete_main_bridge
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@130 -- # return 0
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # _dev=0
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@41 -- # dev_map=()
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/setup.sh@284 -- # iptr
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-save
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:13:15.183 10:30:55
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@542 -- # iptables-restore
00:13:15.183
00:13:15.183 real 0m20.179s
00:13:15.183 user 0m42.613s
00:13:15.183 sys 0m8.691s
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1130 -- # xtrace_disable
00:13:15.183 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x
00:13:15.183 ************************************
00:13:15.183 END TEST nvmf_connect_stress
00:13:15.183 ************************************
00:13:15.443 10:30:55 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:15.443 10:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:13:15.443 10:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable
00:13:15.443 10:30:55 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x
00:13:15.443 ************************************
00:13:15.443 START TEST nvmf_fused_ordering
00:13:15.443 ************************************
00:13:15.443 10:30:55 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp
00:13:15.443 * Looking for test storage...
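The connect_stress run that ends above polls its workload with `kill -0 3170334` (check only, no signal sent) until the PID disappears, and the `killprocess 3170123` teardown verifies the process name with `ps --no-headers -o comm=` before actually killing it. A minimal standalone sketch of those two patterns (the `sleep` workloads and the `safe_kill` helper name are illustrative, not the SPDK scripts themselves):

```shell
#!/usr/bin/env bash
# Liveness poll: `kill -0` only tests that the PID exists, so the loop
# spins until the background workload exits on its own.
sleep 1 &                  # stand-in for the stress workload
pid=$!
while kill -0 "$pid" 2>/dev/null; do
    : # the real script would issue `rpc_cmd` against the target here
    sleep 0.1
done
wait "$pid"                # reap the child so the PID is not reused confusingly

# Name-guarded kill, like the killprocess helper traced above: only signal
# the PID if `ps -o comm=` still reports the expected command name.
safe_kill() {
    local pid=$1 expected=$2 name
    name=$(ps --no-headers -o comm= "$pid" 2>/dev/null) || return 1
    [ "$name" = "$expected" ] || return 1
    kill "$pid"
}

sleep 30 &
victim=$!
safe_kill "$victim" wrong_name || echo "name mismatch, not killed"
safe_kill "$victim" sleep && echo "killed $victim"
```

The name check is what lets the autotest harness recycle PIDs safely: a stale PID that now belongs to some other process fails the `comm=` comparison and is left alone.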
00:13:15.443 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lcov --version 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # IFS=.-: 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@336 -- # read -ra ver1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # IFS=.-: 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@337 -- # read -ra ver2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@338 -- # local 'op=<' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@340 -- # ver1_l=2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@341 -- # ver2_l=1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@344 -- # case "$op" in 00:13:15.443 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@345 -- # : 1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # decimal 1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@365 -- # ver1[v]=1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # decimal 2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@353 -- # local d=2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@355 -- # echo 2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@366 -- # ver2[v]=2 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@368 -- # return 0 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:15.443 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:15.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.443 --rc genhtml_branch_coverage=1 00:13:15.443 --rc genhtml_function_coverage=1 00:13:15.443 --rc genhtml_legend=1 00:13:15.443 --rc geninfo_all_blocks=1 00:13:15.443 --rc geninfo_unexecuted_blocks=1 00:13:15.443 00:13:15.443 ' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:15.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.443 --rc genhtml_branch_coverage=1 00:13:15.443 --rc genhtml_function_coverage=1 00:13:15.443 --rc genhtml_legend=1 00:13:15.443 --rc geninfo_all_blocks=1 00:13:15.443 --rc geninfo_unexecuted_blocks=1 00:13:15.443 00:13:15.443 ' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:15.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.443 --rc genhtml_branch_coverage=1 00:13:15.443 --rc genhtml_function_coverage=1 00:13:15.443 --rc genhtml_legend=1 00:13:15.443 --rc geninfo_all_blocks=1 00:13:15.443 --rc geninfo_unexecuted_blocks=1 00:13:15.443 00:13:15.443 ' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:15.443 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:15.443 --rc genhtml_branch_coverage=1 00:13:15.443 --rc genhtml_function_coverage=1 00:13:15.443 --rc genhtml_legend=1 00:13:15.443 --rc geninfo_all_blocks=1 00:13:15.443 --rc geninfo_unexecuted_blocks=1 00:13:15.443 00:13:15.443 ' 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 
00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:15.443 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:15.444 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@15 -- # shopt -s extglob 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:15.444 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@50 -- # : 0 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:15.444 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # remove_target_ns 00:13:15.444 10:30:56 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 -- # xtrace_disable 00:13:15.444 10:30:56 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # pci_devs=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # net_devs=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # e810=() 
00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@136 -- # local -ga e810 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # x722=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@137 -- # local -ga x722 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # mlx=() 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@138 -- # local -ga mlx 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:22.014 10:31:01 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:22.014 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 
00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:22.014 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:22.014 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
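The "Found net devices under ..." lines above come from mapping each PCI address to its network interfaces via a sysfs glob and then stripping the directory prefix with `${pci_net_devs[@]##*/}`. A minimal sketch of that lookup, with the sysfs base made a parameter so it can be exercised against a fake tree (the helper name is illustrative; nvmf/common.sh inlines this logic):

```shell
#!/usr/bin/env bash
# Hedged sketch of the pci -> net-device lookup performed in the trace
# above. /sys/bus/pci/devices/<pci>/net/ contains one entry per interface
# owned by that PCI function; the base dir is a parameter for testability.
pci_to_net_devs() {
    local pci=$1 base=${2:-/sys/bus/pci/devices}
    local devs=("$base/$pci/net/"*)
    # keep only the interface names, dropping the sysfs path prefix
    devs=("${devs[@]##*/}")
    echo "${devs[@]}"
}
```

With a real device this would print e.g. `cvl_0_0` for `0000:86:00.0`, matching the log output.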
00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:22.015 Found net devices under 0000:86:00.0: cvl_0_0 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:22.015 Found net devices under 0000:86:00.1: cvl_0_1 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@262 -- # is_hw=yes 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:22.015 
10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@257 -- # create_target_ns 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@25 -- # local no=1 
type=phy transport=tcp ip_pool=0x0a000001 max 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@28 -- # local -g _dev 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # ips=() 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 
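The `set_up`/`set_ip` helpers traced above share one pattern: an optional second argument names a bash array (here `NVMF_TARGET_NS_CMD`, i.e. `ip netns exec nvmf_ns_spdk`), bound via a `local -n` nameref and prepended to the command so the same helper runs either on the host or inside the target namespace. A hedged sketch of that pattern, with `run_cmd` as an illustrative name (the real helpers are `set_up`, `set_ip`, `ping_ip`):

```shell
#!/usr/bin/env bash
# Hedged sketch of the nameref-based "run optionally inside a netns"
# pattern from nvmf/setup.sh. run_cmd is an illustrative name.
run_cmd() {
    local cmd=$1 in_ns=$2
    if [[ -n $in_ns ]]; then
        local -n ns=$in_ns   # nameref to the caller's prefix array
        eval "${ns[*]} $cmd"
    else
        eval "$cmd"
    fi
}
```

In the log, `run_cmd "ip link set cvl_0_1 up" NVMF_TARGET_NS_CMD` would correspond to the `eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up'` line, while the same call without the second argument runs the bare command (namerefs require bash 4.3+).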
00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:22.015 10:31:01 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772161 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 
-- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:22.015 10.0.0.1 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@11 -- # local val=167772162 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:22.015 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:22.015 10.0.0.2 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:22.015 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i 
cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:22.016 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:22.016 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.436 ms 00:13:22.016 00:13:22.016 --- 10.0.0.1 ping statistics --- 00:13:22.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.016 rtt min/avg/max/mdev = 0.436/0.436/0.436/0.000 ms 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:22.016 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
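[Annotation] The trace above resolves each endpoint's address by mapping a logical device name (initiator0/target0) to a kernel netdev and reading the IP the setup scripts stored in `/sys/class/net/<dev>/ifalias`, optionally under `ip netns exec`. A minimal sketch of that lookup pattern follows; the fake sysfs tree, `SYSFS_ROOT`, and the hard-coded device map are test scaffolding invented for illustration, not part of SPDK's setup.sh.

```shell
#!/usr/bin/env bash
# Sketch of the get_net_dev / get_ip_address pattern from the trace.
# SYSFS_ROOT and the map below are hypothetical stand-ins so the sketch
# runs without real cvl_0_* interfaces or a network namespace.
set -euo pipefail

SYSFS_ROOT="$(mktemp -d)"
mkdir -p "$SYSFS_ROOT/cvl_0_0" "$SYSFS_ROOT/cvl_0_1"
echo 10.0.0.1 > "$SYSFS_ROOT/cvl_0_0/ifalias"   # initiator side
echo 10.0.0.2 > "$SYSFS_ROOT/cvl_0_1/ifalias"   # target side

# Map a logical name to a kernel netdev; fail (return 1) when unmapped,
# mirroring the trace's "[[ -n '' ]] ... return 1" branch for initiator1.
get_net_dev() {
    local dev=$1
    declare -A map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
    [[ -n ${map[$dev]:-} ]] || return 1
    echo "${map[$dev]}"
}

# Read the IP recorded in ifalias; an unmapped device yields empty output
# with status 0, just as the trace shows for the missing second initiator.
get_ip_address() {
    local dev ip
    dev=$(get_net_dev "$1") || return 0
    ip=$(cat "$SYSFS_ROOT/$dev/ifalias")
    [[ -n $ip ]] && echo "$ip"
}

get_ip_address initiator0   # prints 10.0.0.1
get_ip_address target0      # prints 10.0.0.2
```

In the real helper the namespace prefix is carried in a bash nameref (`local -n ns=NVMF_TARGET_NS_CMD`) so the same `cat` runs either directly or via `ip netns exec nvmf_ns_spdk`.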
00:13:22.016 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:13:22.016 00:13:22.016 --- 10.0.0.2 ping statistics --- 00:13:22.016 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:22.016 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # return 0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.016 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target0 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:22.017 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@107 -- # local dev=target1 00:13:22.017 
10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@109 -- # return 1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@168 -- # dev= 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@169 -- # return 0 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@328 -- # nvmfpid=3175558 00:13:22.017 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # waitforlisten 3175558 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@835 -- # '[' -z 3175558 ']' 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:22.017 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 [2024-11-20 10:31:02.343112] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:22.017 [2024-11-20 10:31:02.343154] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:22.017 [2024-11-20 10:31:02.422154] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:22.017 [2024-11-20 10:31:02.462542] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
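[Annotation] Here `nvmf_tgt` is launched in the background inside the namespace and `waitforlisten` blocks (with `max_retries=100`) until the app is accepting RPCs on `/var/tmp/spdk.sock`. A minimal sketch of that poll-with-retry-cap idea, assuming a touch-based stand-in for the target process; the real helper probes the UNIX socket with an actual RPC call rather than checking for a path:

```shell
#!/usr/bin/env bash
# Sketch of the waitforlisten idea: poll until a path appears, up to a
# retry cap. wait_for_path and the backgrounded touch are illustrative
# stand-ins for nvmf_tgt creating its RPC socket.
set -euo pipefail

wait_for_path() {
    local path=$1 max_retries=${2:-100} i
    for ((i = 0; i < max_retries; i++)); do
        [[ -e $path ]] && return 0
        sleep 0.1
    done
    echo "timed out waiting for $path" >&2
    return 1
}

sock="$(mktemp -d)/spdk.sock"
( sleep 0.3; touch "$sock" ) &   # stand-in for the target coming up
wait_for_path "$sock" && echo "listening: $sock"
```

The retry cap matters in CI: a target that crashes on startup makes the helper fail fast with a diagnostic instead of hanging the pipeline.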
00:13:22.017 [2024-11-20 10:31:02.462579] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:22.017 [2024-11-20 10:31:02.462586] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:22.017 [2024-11-20 10:31:02.462592] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:22.017 [2024-11-20 10:31:02.462597] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:22.017 [2024-11-20 10:31:02.463184] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@868 -- # return 0 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 [2024-11-20 10:31:02.597520] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 [2024-11-20 10:31:02.617711] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 NULL1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:13:22.017 10:31:02 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:22.017 10:31:02 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:13:22.017 [2024-11-20 10:31:02.675064] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
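[Annotation] The `rpc_cmd` calls above (fused_ordering.sh lines 15-20) set up the target in one pass: create the TCP transport, create the subsystem, add a listener, back it with a null bdev, and attach that bdev as a namespace. Collected as a dry-run script below, with arguments taken verbatim from the trace; `RPC_CMD` is an echo stand-in, so swap in `scripts/rpc.py -s /var/tmp/spdk.sock` against a running `nvmf_tgt` to execute it for real.

```shell
#!/usr/bin/env bash
# Dry-run of the RPC setup sequence from the trace. RPC_CMD is a stand-in
# that prints each command instead of sending it.
set -euo pipefail

RPC_CMD="echo rpc.py"            # really: scripts/rpc.py -s /var/tmp/spdk.sock
NQN=nqn.2016-06.io.spdk:cnode1

$RPC_CMD nvmf_create_transport -t tcp -o -u 8192
$RPC_CMD nvmf_create_subsystem "$NQN" -a -s SPDK00000000000001 -m 10
$RPC_CMD nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
$RPC_CMD bdev_null_create NULL1 1000 512     # 1000 MiB, 512 B blocks ("size: 1GB" above)
$RPC_CMD bdev_wait_for_examine
$RPC_CMD nvmf_subsystem_add_ns "$NQN" NULL1
```

Ordering is the point: `bdev_wait_for_examine` runs before `nvmf_subsystem_add_ns` so the namespace is only attached once the bdev layer has finished examining NULL1, after which the `fused_ordering` initiator can connect to `10.0.0.2:4420` and drive the fused compare-and-write sequence the log records.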
00:13:22.017 [2024-11-20 10:31:02.675096] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3175737 ] 00:13:22.585 Attached to nqn.2016-06.io.spdk:cnode1 00:13:22.585 Namespace ID: 1 size: 1GB 00:13:22.585 fused_ordering(0) 00:13:22.585 fused_ordering(1) 00:13:22.585 fused_ordering(2) 00:13:22.585 fused_ordering(3) 00:13:22.585 fused_ordering(4) 00:13:22.585 fused_ordering(5) 00:13:22.585 fused_ordering(6) 00:13:22.585 fused_ordering(7) 00:13:22.585 fused_ordering(8) 00:13:22.585 fused_ordering(9) 00:13:22.585 fused_ordering(10) 00:13:22.585 fused_ordering(11) 00:13:22.585 fused_ordering(12) 00:13:22.585 fused_ordering(13) 00:13:22.585 fused_ordering(14) 00:13:22.585 fused_ordering(15) 00:13:22.585 fused_ordering(16) 00:13:22.585 fused_ordering(17) 00:13:22.585 fused_ordering(18) 00:13:22.585 fused_ordering(19) 00:13:22.585 fused_ordering(20) 00:13:22.585 fused_ordering(21) 00:13:22.585 fused_ordering(22) 00:13:22.585 fused_ordering(23) 00:13:22.585 fused_ordering(24) 00:13:22.585 fused_ordering(25) 00:13:22.585 fused_ordering(26) 00:13:22.585 fused_ordering(27) 00:13:22.585 fused_ordering(28) 00:13:22.585 fused_ordering(29) 00:13:22.585 fused_ordering(30) 00:13:22.585 fused_ordering(31) 00:13:22.585 fused_ordering(32) 00:13:22.585 fused_ordering(33) 00:13:22.585 fused_ordering(34) 00:13:22.585 fused_ordering(35) 00:13:22.585 fused_ordering(36) 00:13:22.585 fused_ordering(37) 00:13:22.585 fused_ordering(38) 00:13:22.585 fused_ordering(39) 00:13:22.585 fused_ordering(40) 00:13:22.585 fused_ordering(41) 00:13:22.585 fused_ordering(42) 00:13:22.585 fused_ordering(43) 00:13:22.585 fused_ordering(44) 00:13:22.585 fused_ordering(45) 00:13:22.585 fused_ordering(46) 00:13:22.585 fused_ordering(47) 00:13:22.585 fused_ordering(48) 00:13:22.585 fused_ordering(49) 00:13:22.585 
fused_ordering(50) 00:13:22.585 fused_ordering(51) 00:13:22.585 fused_ordering(52) 00:13:22.585 fused_ordering(53) 00:13:22.585 fused_ordering(54) 00:13:22.585 fused_ordering(55) 00:13:22.585 fused_ordering(56) 00:13:22.585 fused_ordering(57) 00:13:22.585 fused_ordering(58) 00:13:22.585 fused_ordering(59) 00:13:22.585 fused_ordering(60) 00:13:22.585 fused_ordering(61) 00:13:22.585 fused_ordering(62) 00:13:22.585 fused_ordering(63) 00:13:22.585 fused_ordering(64) 00:13:22.585 fused_ordering(65) 00:13:22.585 fused_ordering(66) 00:13:22.585 fused_ordering(67) 00:13:22.585 fused_ordering(68) 00:13:22.585 fused_ordering(69) 00:13:22.585 fused_ordering(70) 00:13:22.585 fused_ordering(71) 00:13:22.585 fused_ordering(72) 00:13:22.585 fused_ordering(73) 00:13:22.585 fused_ordering(74) 00:13:22.585 fused_ordering(75) 00:13:22.585 fused_ordering(76) 00:13:22.585 fused_ordering(77) 00:13:22.585 fused_ordering(78) 00:13:22.585 fused_ordering(79) 00:13:22.585 fused_ordering(80) 00:13:22.585 fused_ordering(81) 00:13:22.585 fused_ordering(82) 00:13:22.585 fused_ordering(83) 00:13:22.585 fused_ordering(84) 00:13:22.585 fused_ordering(85) 00:13:22.585 fused_ordering(86) 00:13:22.585 fused_ordering(87) 00:13:22.585 fused_ordering(88) 00:13:22.585 fused_ordering(89) 00:13:22.585 fused_ordering(90) 00:13:22.585 fused_ordering(91) 00:13:22.585 fused_ordering(92) 00:13:22.585 fused_ordering(93) 00:13:22.585 fused_ordering(94) 00:13:22.585 fused_ordering(95) 00:13:22.585 fused_ordering(96) 00:13:22.585 fused_ordering(97) 00:13:22.585 fused_ordering(98) 00:13:22.585 fused_ordering(99) 00:13:22.585 fused_ordering(100) 00:13:22.585 fused_ordering(101) 00:13:22.585 fused_ordering(102) 00:13:22.585 fused_ordering(103) 00:13:22.585 fused_ordering(104) 00:13:22.585 fused_ordering(105) 00:13:22.585 fused_ordering(106) 00:13:22.585 fused_ordering(107) 00:13:22.585 fused_ordering(108) 00:13:22.585 fused_ordering(109) 00:13:22.585 fused_ordering(110) 00:13:22.585 fused_ordering(111) 00:13:22.585 
fused_ordering(112) 00:13:22.585 fused_ordering(113) 00:13:22.585 fused_ordering(114) 00:13:22.585 fused_ordering(115) 00:13:22.585 fused_ordering(116) 00:13:22.585 fused_ordering(117) 00:13:22.585 fused_ordering(118) 00:13:22.585 fused_ordering(119) 00:13:22.585 fused_ordering(120) 00:13:22.585 fused_ordering(121) 00:13:22.585 fused_ordering(122) 00:13:22.585 fused_ordering(123) 00:13:22.585 fused_ordering(124) 00:13:22.585 fused_ordering(125) 00:13:22.585 fused_ordering(126) 00:13:22.585 fused_ordering(127) 00:13:22.585 fused_ordering(128) 00:13:22.585 fused_ordering(129) 00:13:22.585 fused_ordering(130) 00:13:22.585 fused_ordering(131) 00:13:22.585 fused_ordering(132) 00:13:22.585 fused_ordering(133) 00:13:22.585 fused_ordering(134) 00:13:22.585 fused_ordering(135) 00:13:22.585 fused_ordering(136) 00:13:22.585 fused_ordering(137) 00:13:22.585 fused_ordering(138) 00:13:22.585 fused_ordering(139) 00:13:22.585 fused_ordering(140) 00:13:22.585 fused_ordering(141) 00:13:22.585 fused_ordering(142) 00:13:22.585 fused_ordering(143) 00:13:22.585 fused_ordering(144) 00:13:22.585 fused_ordering(145) 00:13:22.585 fused_ordering(146) 00:13:22.585 fused_ordering(147) 00:13:22.585 fused_ordering(148) 00:13:22.585 fused_ordering(149) 00:13:22.585 fused_ordering(150) 00:13:22.585 fused_ordering(151) 00:13:22.585 fused_ordering(152) 00:13:22.585 fused_ordering(153) 00:13:22.585 fused_ordering(154) 00:13:22.585 fused_ordering(155) 00:13:22.585 fused_ordering(156) 00:13:22.585 fused_ordering(157) 00:13:22.585 fused_ordering(158) 00:13:22.585 fused_ordering(159) 00:13:22.585 fused_ordering(160) 00:13:22.585 fused_ordering(161) 00:13:22.585 fused_ordering(162) 00:13:22.585 fused_ordering(163) 00:13:22.585 fused_ordering(164) 00:13:22.585 fused_ordering(165) 00:13:22.585 fused_ordering(166) 00:13:22.585 fused_ordering(167) 00:13:22.585 fused_ordering(168) 00:13:22.585 fused_ordering(169) 00:13:22.585 fused_ordering(170) 00:13:22.585 fused_ordering(171) 00:13:22.585 fused_ordering(172) 
00:13:22.586 fused_ordering(173) 00:13:22.586 fused_ordering(174) 00:13:22.586 fused_ordering(175) 00:13:22.586 fused_ordering(176) 00:13:22.586 fused_ordering(177) 00:13:22.586 fused_ordering(178) 00:13:22.586 fused_ordering(179) 00:13:22.586 fused_ordering(180) 00:13:22.586 fused_ordering(181) 00:13:22.586 fused_ordering(182) 00:13:22.586 fused_ordering(183) 00:13:22.586 fused_ordering(184) 00:13:22.586 fused_ordering(185) 00:13:22.586 fused_ordering(186) 00:13:22.586 fused_ordering(187) 00:13:22.586 fused_ordering(188) 00:13:22.586 fused_ordering(189) 00:13:22.586 fused_ordering(190) 00:13:22.586 fused_ordering(191) 00:13:22.586 fused_ordering(192) 00:13:22.586 fused_ordering(193) 00:13:22.586 fused_ordering(194) 00:13:22.586 fused_ordering(195) 00:13:22.586 fused_ordering(196) 00:13:22.586 fused_ordering(197) 00:13:22.586 fused_ordering(198) 00:13:22.586 fused_ordering(199) 00:13:22.586 fused_ordering(200) 00:13:22.586 fused_ordering(201) 00:13:22.586 fused_ordering(202) 00:13:22.586 fused_ordering(203) 00:13:22.586 fused_ordering(204) 00:13:22.586 fused_ordering(205) 00:13:22.845 fused_ordering(206) 00:13:22.845 fused_ordering(207) 00:13:22.845 fused_ordering(208) 00:13:22.845 fused_ordering(209) 00:13:22.845 fused_ordering(210) 00:13:22.845 fused_ordering(211) 00:13:22.845 fused_ordering(212) 00:13:22.845 fused_ordering(213) 00:13:22.845 fused_ordering(214) 00:13:22.845 fused_ordering(215) 00:13:22.845 fused_ordering(216) 00:13:22.845 fused_ordering(217) 00:13:22.845 fused_ordering(218) 00:13:22.845 fused_ordering(219) 00:13:22.845 fused_ordering(220) 00:13:22.845 fused_ordering(221) 00:13:22.845 fused_ordering(222) 00:13:22.845 fused_ordering(223) 00:13:22.845 fused_ordering(224) 00:13:22.845 fused_ordering(225) 00:13:22.845 fused_ordering(226) 00:13:22.845 fused_ordering(227) 00:13:22.845 fused_ordering(228) 00:13:22.845 fused_ordering(229) 00:13:22.845 fused_ordering(230) 00:13:22.845 fused_ordering(231) 00:13:22.845 fused_ordering(232) 00:13:22.845 
fused_ordering(233) 00:13:22.845 fused_ordering(234) 00:13:22.845 fused_ordering(235) 00:13:22.845 fused_ordering(236) 00:13:22.845 fused_ordering(237) 00:13:22.845 fused_ordering(238) 00:13:22.845 fused_ordering(239) 00:13:22.845 fused_ordering(240) 00:13:22.845 fused_ordering(241) 00:13:22.845 fused_ordering(242) 00:13:22.845 fused_ordering(243) 00:13:22.845 fused_ordering(244) 00:13:22.845 fused_ordering(245) 00:13:22.845 fused_ordering(246) 00:13:22.845 fused_ordering(247) 00:13:22.845 fused_ordering(248) 00:13:22.845 fused_ordering(249) 00:13:22.845 fused_ordering(250) 00:13:22.845 fused_ordering(251) 00:13:22.845 fused_ordering(252) 00:13:22.845 fused_ordering(253) 00:13:22.845 fused_ordering(254) 00:13:22.845 fused_ordering(255) 00:13:22.845 fused_ordering(256) 00:13:22.845 fused_ordering(257) 00:13:22.845 fused_ordering(258) 00:13:22.845 fused_ordering(259) 00:13:22.845 fused_ordering(260) 00:13:22.845 fused_ordering(261) 00:13:22.845 fused_ordering(262) 00:13:22.845 fused_ordering(263) 00:13:22.845 fused_ordering(264) 00:13:22.845 fused_ordering(265) 00:13:22.845 fused_ordering(266) 00:13:22.845 fused_ordering(267) 00:13:22.845 fused_ordering(268) 00:13:22.845 fused_ordering(269) 00:13:22.845 fused_ordering(270) 00:13:22.845 fused_ordering(271) 00:13:22.845 fused_ordering(272) 00:13:22.845 fused_ordering(273) 00:13:22.845 fused_ordering(274) 00:13:22.845 fused_ordering(275) 00:13:22.845 fused_ordering(276) 00:13:22.845 fused_ordering(277) 00:13:22.845 fused_ordering(278) 00:13:22.845 fused_ordering(279) 00:13:22.845 fused_ordering(280) 00:13:22.845 fused_ordering(281) 00:13:22.845 fused_ordering(282) 00:13:22.845 fused_ordering(283) 00:13:22.845 fused_ordering(284) 00:13:22.845 fused_ordering(285) 00:13:22.845 fused_ordering(286) 00:13:22.845 fused_ordering(287) 00:13:22.845 fused_ordering(288) 00:13:22.845 fused_ordering(289) 00:13:22.845 fused_ordering(290) 00:13:22.845 fused_ordering(291) 00:13:22.845 fused_ordering(292) 00:13:22.845 fused_ordering(293) 
00:13:22.845 fused_ordering(294) 00:13:22.845 fused_ordering(295) 00:13:22.845 fused_ordering(296) 00:13:22.845 fused_ordering(297) 00:13:22.845 fused_ordering(298) 00:13:22.845 fused_ordering(299) 00:13:22.845 fused_ordering(300) 00:13:22.845 fused_ordering(301) 00:13:22.845 fused_ordering(302) 00:13:22.845 fused_ordering(303) 00:13:22.845 fused_ordering(304) 00:13:22.845 fused_ordering(305) 00:13:22.845 fused_ordering(306) 00:13:22.845 fused_ordering(307) 00:13:22.845 fused_ordering(308) 00:13:22.845 fused_ordering(309) 00:13:22.845 fused_ordering(310) 00:13:22.845 fused_ordering(311) 00:13:22.845 fused_ordering(312) 00:13:22.845 fused_ordering(313) 00:13:22.845 fused_ordering(314) 00:13:22.845 fused_ordering(315) 00:13:22.845 fused_ordering(316) 00:13:22.845 fused_ordering(317) 00:13:22.845 fused_ordering(318) 00:13:22.845 fused_ordering(319) 00:13:22.845 fused_ordering(320) 00:13:22.845 fused_ordering(321) 00:13:22.845 fused_ordering(322) 00:13:22.845 fused_ordering(323) 00:13:22.845 fused_ordering(324) 00:13:22.845 fused_ordering(325) 00:13:22.845 fused_ordering(326) 00:13:22.845 fused_ordering(327) 00:13:22.845 fused_ordering(328) 00:13:22.845 fused_ordering(329) 00:13:22.845 fused_ordering(330) 00:13:22.845 fused_ordering(331) 00:13:22.845 fused_ordering(332) 00:13:22.845 fused_ordering(333) 00:13:22.845 fused_ordering(334) 00:13:22.845 fused_ordering(335) 00:13:22.845 fused_ordering(336) 00:13:22.845 fused_ordering(337) 00:13:22.845 fused_ordering(338) 00:13:22.845 fused_ordering(339) 00:13:22.845 fused_ordering(340) 00:13:22.845 fused_ordering(341) 00:13:22.845 fused_ordering(342) 00:13:22.845 fused_ordering(343) 00:13:22.845 fused_ordering(344) 00:13:22.845 fused_ordering(345) 00:13:22.845 fused_ordering(346) 00:13:22.845 fused_ordering(347) 00:13:22.845 fused_ordering(348) 00:13:22.845 fused_ordering(349) 00:13:22.845 fused_ordering(350) 00:13:22.845 fused_ordering(351) 00:13:22.845 fused_ordering(352) 00:13:22.845 fused_ordering(353) 00:13:22.845 
fused_ordering(354) 00:13:22.845 [fused_ordering(355) through fused_ordering(1018) elided: identical per-iteration counter output, timestamps advancing from 00:13:22.845 to 00:13:23.933]
fused_ordering(1019) 00:13:23.933 fused_ordering(1020) 00:13:23.933 fused_ordering(1021) 00:13:23.933 fused_ordering(1022) 00:13:23.933 fused_ordering(1023) 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@99 -- # sync 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@102 -- # set +e 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:23.933 rmmod nvme_tcp 00:13:23.933 rmmod nvme_fabrics 00:13:23.933 rmmod nvme_keyring 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@106 -- # set -e 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@107 -- # return 0 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@336 -- # '[' -n 3175558 ']' 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@337 -- # killprocess 3175558 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # '[' -z 3175558 ']' 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # kill -0 3175558 00:13:23.933 10:31:04 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # uname 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3175558 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3175558' 00:13:23.933 killing process with pid 3175558 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@973 -- # kill 3175558 00:13:23.933 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@978 -- # wait 3175558 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # nvmf_fini 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@264 -- # local dev 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:24.193 10:31:04 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@268 -- # 
delete_main_bridge 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@130 -- # return 0 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # _dev=0 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@41 -- # dev_map=() 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/setup.sh@284 -- # iptr 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-save 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@542 -- # iptables-restore 00:13:26.729 00:13:26.729 real 0m10.917s 00:13:26.729 user 0m5.239s 00:13:26.729 sys 0m5.868s 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:13:26.729 ************************************ 00:13:26.729 END TEST nvmf_fused_ordering 00:13:26.729 ************************************ 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:26.729 ************************************ 00:13:26.729 START TEST nvmf_ns_masking 00:13:26.729 
************************************ 00:13:26.729 10:31:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1129 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:13:26.729 * Looking for test storage... 00:13:26.729 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lcov --version 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # IFS=.-: 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@336 -- # read -ra ver1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # IFS=.-: 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@337 -- # read -ra ver2 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@338 -- # local 'op=<' 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@340 -- # ver1_l=2 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@341 -- # ver2_l=1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@344 -- # case "$op" in 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@345 -- # : 1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # decimal 1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 1 00:13:26.729 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@365 -- # ver1[v]=1 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # decimal 2 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@353 -- # local d=2 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@355 -- # echo 2 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@366 -- # ver2[v]=2 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@368 -- # return 0 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:26.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.730 --rc genhtml_branch_coverage=1 00:13:26.730 --rc genhtml_function_coverage=1 00:13:26.730 --rc genhtml_legend=1 00:13:26.730 --rc geninfo_all_blocks=1 00:13:26.730 --rc geninfo_unexecuted_blocks=1 00:13:26.730 00:13:26.730 ' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:26.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.730 --rc genhtml_branch_coverage=1 00:13:26.730 --rc genhtml_function_coverage=1 00:13:26.730 --rc genhtml_legend=1 00:13:26.730 --rc geninfo_all_blocks=1 00:13:26.730 --rc geninfo_unexecuted_blocks=1 00:13:26.730 00:13:26.730 ' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:26.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.730 --rc genhtml_branch_coverage=1 00:13:26.730 --rc genhtml_function_coverage=1 00:13:26.730 --rc genhtml_legend=1 00:13:26.730 --rc geninfo_all_blocks=1 00:13:26.730 --rc geninfo_unexecuted_blocks=1 00:13:26.730 00:13:26.730 ' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:26.730 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:26.730 --rc genhtml_branch_coverage=1 00:13:26.730 --rc genhtml_function_coverage=1 00:13:26.730 --rc genhtml_legend=1 00:13:26.730 --rc geninfo_all_blocks=1 00:13:26.730 --rc geninfo_unexecuted_blocks=1 00:13:26.730 00:13:26.730 ' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:26.730 10:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:26.730 10:31:07 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@15 -- # shopt -s extglob 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@50 -- # : 0 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:26.730 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=f364f2f2-cd28-44fb-8675-b24fc591bceb 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@14 -- # ns2uuid=5fd8728e-f0e6-41b1-9837-ea967a5a9119 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=94c5cd06-9e6f-4f46-9d0e-0cfbe90cf800 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # local -g is_hw=no 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # remove_target_ns 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:26.730 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:26.731 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:26.731 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:26.731 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # 
gather_supported_nvmf_pci_devs 00:13:26.731 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # xtrace_disable 00:13:26.731 10:31:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # pci_devs=() 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:33.301 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # net_devs=() 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # e810=() 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@136 -- # local -ga e810 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # x722=() 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@137 -- # local -ga x722 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # mlx=() 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@138 -- # local -ga mlx 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:33.302 
10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.302 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.302 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.302 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.302 10:31:12 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.302 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@262 -- # is_hw=yes 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@257 -- # create_target_ns 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip 
netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@28 -- # local -g _dev 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # ips=() 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@44 -- # local id=0 type=phy 
ip=167772161 transport=tcp ips 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:33.302 10:31:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:33.302 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772161 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:33.303 10.0.0.1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@11 -- # local val=167772162 00:13:33.303 10:31:13 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:33.303 10.0.0.2 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@215 -- 
# local -n ns=NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@38 -- # ping_ips 1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1'
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1
00:13:33.303 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data.
00:13:33.303 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms
00:13:33.303
00:13:33.303 --- 10.0.0.1 ping statistics ---
00:13:33.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.303 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@90 -- # [[ -n '' ]]
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2'
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2
00:13:33.303 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data.
00:13:33.303 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.117 ms
00:13:33.303
00:13:33.303 --- 10.0.0.2 ping statistics ---
00:13:33.303 1 packets transmitted, 1 received, 0% packet loss, time 0ms
00:13:33.303 rtt min/avg/max/mdev = 0.117/0.117/0.117/0.000 ms
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair++ ))
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@98 -- # (( pair < pairs ))
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}")
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # return 0
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # '[' '' == iso ']'
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # nvmf_legacy_env
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1
00:13:33.303 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2=
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address ''
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@183 -- # get_ip_address initiator1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n '' ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev initiator1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=initiator1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP=
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target0 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@110 -- # echo cvl_0_1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=cvl_0_1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@172 -- # ip=10.0.0.2
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@175 -- # echo 10.0.0.2
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # get_net_dev target1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@107 -- # local dev=target1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n target1 ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # [[ -n '' ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@109 -- # return 1
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@168 -- # dev=
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@169 -- # return 0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP=
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]]
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # '[' tcp == tcp ']'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # modprobe nvme-tcp
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@726 -- # xtrace_disable
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@328 -- # nvmfpid=3179578
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # waitforlisten 3179578
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@835 -- # '[' -z 3179578 ']'
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:13:33.304 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:33.304 [2024-11-20 10:31:13.334488] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:13:33.304 [2024-11-20 10:31:13.334534] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:13:33.304 [2024-11-20 10:31:13.412393] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:13:33.304 [2024-11-20 10:31:13.453073] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:13:33.304 [2024-11-20 10:31:13.453109] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:13:33.304 [2024-11-20 10:31:13.453117] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:13:33.304 [2024-11-20 10:31:13.453123] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:13:33.304 [2024-11-20 10:31:13.453128] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:13:33.304 [2024-11-20 10:31:13.453676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@732 -- # xtrace_disable
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
00:13:33.304 [2024-11-20 10:31:13.757587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512
00:13:33.304 10:31:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
00:13:33.304 Malloc1
00:13:33.304 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2
00:13:33.563 Malloc2
00:13:33.563 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
00:13:33.822 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1
00:13:34.080 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
00:13:34.080 [2024-11-20 10:31:14.789306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***
00:13:34.338 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect
00:13:34.338 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94c5cd06-9e6f-4f46-9d0e-0cfbe90cf800 -a 10.0.0.2 -s 4420 -i 4
00:13:34.338 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME
00:13:34.338 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:34.338 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:34.339 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n '' ]]
00:13:34.339 10:31:14 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:36.240 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:36.240 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:36.240 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:36.240 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:36.241 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:36.241 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:36.241 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:36.241 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:36.500 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:36.500 10:31:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:36.500 [ 0]:0x1
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=956d964085a64b8fb92d071b9ff477fa
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 956d964085a64b8fb92d071b9ff477fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:36.500 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:36.758 [ 0]:0x1
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=956d964085a64b8fb92d071b9ff477fa
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 956d964085a64b8fb92d071b9ff477fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:36.758 [ 1]:0x2
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:36.758 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:36.758 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:13:37.017 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@83 -- # connect 1
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94c5cd06-9e6f-4f46-9d0e-0cfbe90cf800 -a 10.0.0.2 -s 4420 -i 4
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 1 ]]
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=1
00:13:37.275 10:31:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=1
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:39.844 10:31:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:39.844 [ 0]:0x2
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:39.844 [ 0]:0x1
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=956d964085a64b8fb92d071b9ff477fa
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 956d964085a64b8fb92d071b9ff477fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:39.844 [ 1]:0x2
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:39.844 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:13:40.102 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2
00:13:40.103 [ 0]:0x2
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]]
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect
00:13:40.103 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1
00:13:40.361 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s)
00:13:40.361 10:31:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I 94c5cd06-9e6f-4f46-9d0e-0cfbe90cf800 -a 10.0.0.2 -s 4420 -i 4
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1202 -- # local i=0
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1204 -- # [[ -n 2 ]]
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # nvme_device_counter=2
00:13:40.619 10:31:21 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1209 -- # sleep 2
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1210 -- # (( i++ <= 15 ))
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1211 -- # nvme_devices=2
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter ))
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1212 -- # return 0
00:13:43.145 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name'
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]]
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1
00:13:43.146 [ 0]:0x1
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=956d964085a64b8fb92d071b9ff477fa
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking
-- target/ns_masking.sh@45 -- # [[ 956d964085a64b8fb92d071b9ff477fa != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.146 [ 1]:0x2 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.146 10:31:23 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.146 [ 0]:0x2 
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:43.146 10:31:23 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:13:43.404 [2024-11-20 10:31:24.003559] nvmf_rpc.c:1870:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:13:43.404 request: 00:13:43.404 { 00:13:43.404 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:43.404 "nsid": 2, 00:13:43.404 "host": "nqn.2016-06.io.spdk:host1", 00:13:43.404 "method": "nvmf_ns_remove_host", 00:13:43.404 "req_id": 1 00:13:43.404 } 00:13:43.404 Got JSON-RPC error response 00:13:43.404 response: 00:13:43.404 { 00:13:43.404 "code": -32602, 00:13:43.404 "message": "Invalid parameters" 00:13:43.404 } 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 
00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg ns_is_visible 0x1 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=ns_is_visible 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.404 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t ns_is_visible 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # ns_is_visible 0x1 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:13:43.405 [ 0]:0x2 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:13:43.405 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=a8e20da471af4f7692e919f6d1283032 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ a8e20da471af4f7692e919f6d1283032 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:43.663 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=3181577 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 3181577 /var/tmp/host.sock 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
common/autotest_common.sh@835 -- # '[' -z 3181577 ']' 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:13:43.663 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:43.663 10:31:24 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:43.663 [2024-11-20 10:31:24.236079] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:43.663 [2024-11-20 10:31:24.236126] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3181577 ] 00:13:43.663 [2024-11-20 10:31:24.311542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:43.663 [2024-11-20 10:31:24.353781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:44.597 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:44.597 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@868 -- # return 0 00:13:44.597 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:44.597 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:44.856 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid f364f2f2-cd28-44fb-8675-b24fc591bceb 00:13:44.856 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:13:44.856 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F364F2F2CD2844FB8675B24FC591BCEB -i 00:13:45.113 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 5fd8728e-f0e6-41b1-9837-ea967a5a9119 00:13:45.113 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:13:45.113 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 5FD8728EF0E641B19837EA967A5A9119 -i 00:13:45.371 10:31:25 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:13:45.371 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:13:45.644 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:45.644 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:13:46.212 nvme0n1 00:13:46.212 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:46.212 10:31:26 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:13:46.470 nvme1n2 00:13:46.470 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:13:46.470 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:13:46.470 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:13:46.470 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:13:46.470 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:46.728 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:13:46.728 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:13:46.728 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:13:46.728 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ f364f2f2-cd28-44fb-8675-b24fc591bceb == \f\3\6\4\f\2\f\2\-\c\d\2\8\-\4\4\f\b\-\8\6\7\5\-\b\2\4\f\c\5\9\1\b\c\e\b ]] 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 5fd8728e-f0e6-41b1-9837-ea967a5a9119 == \5\f\d\8\7\2\8\e\-\f\0\e\6\-\4\1\b\1\-\9\8\3\7\-\e\a\9\6\7\a\5\a\9\1\1\9 ]] 00:13:46.987 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@137 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:47.246 10:31:27 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # uuid2nguid f364f2f2-cd28-44fb-8675-b24fc591bceb 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F364F2F2CD2844FB8675B24FC591BCEB 00:13:47.505 10:31:28 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@652 -- # local es=0 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F364F2F2CD2844FB8675B24FC591BCEB 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:13:47.505 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 invalid -n 1 -g F364F2F2CD2844FB8675B24FC591BCEB 00:13:47.763 [2024-11-20 10:31:28.243077] bdev.c:8278:bdev_open_ext: *NOTICE*: Currently 
unable to find bdev with name: invalid 00:13:47.763 [2024-11-20 10:31:28.243114] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1: bdev invalid cannot be opened, error=-19 00:13:47.763 [2024-11-20 10:31:28.243122] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:13:47.763 request: 00:13:47.763 { 00:13:47.763 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:13:47.763 "namespace": { 00:13:47.763 "bdev_name": "invalid", 00:13:47.763 "nsid": 1, 00:13:47.763 "nguid": "F364F2F2CD2844FB8675B24FC591BCEB", 00:13:47.763 "no_auto_visible": false 00:13:47.763 }, 00:13:47.763 "method": "nvmf_subsystem_add_ns", 00:13:47.763 "req_id": 1 00:13:47.763 } 00:13:47.763 Got JSON-RPC error response 00:13:47.763 response: 00:13:47.763 { 00:13:47.763 "code": -32602, 00:13:47.763 "message": "Invalid parameters" 00:13:47.763 } 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@655 -- # es=1 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # uuid2nguid f364f2f2-cd28-44fb-8675-b24fc591bceb 00:13:47.763 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@538 -- # tr -d - 00:13:47.764 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g F364F2F2CD2844FB8675B24FC591BCEB -i 00:13:47.764 10:31:28 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@143 -- # sleep 2s 00:13:50.294 10:31:30 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # hostrpc bdev_get_bdevs 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # jq length 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@144 -- # (( 0 == 0 )) 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@146 -- # killprocess 3181577 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3181577 ']' 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3181577 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3181577 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3181577' 00:13:50.294 killing process with pid 3181577 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3181577 00:13:50.294 10:31:30 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3181577 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@150 -- # nvmftestfini 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # nvmfcleanup 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@99 -- # sync 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@102 -- # set +e 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@103 -- # for i in {1..20} 00:13:50.553 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:13:50.553 rmmod nvme_tcp 00:13:50.553 rmmod nvme_fabrics 00:13:50.553 rmmod nvme_keyring 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@106 -- # set -e 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@107 -- # return 0 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@336 -- # '[' -n 3179578 ']' 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@337 -- # killprocess 3179578 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # '[' -z 3179578 ']' 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # kill -0 3179578 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # uname 
00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3179578 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3179578' 00:13:50.811 killing process with pid 3179578 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@973 -- # kill 3179578 00:13:50.811 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@978 -- # wait 3179578 00:13:51.070 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:13:51.070 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # nvmf_fini 00:13:51.070 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@264 -- # local dev 00:13:51.070 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@267 -- # remove_target_ns 00:13:51.070 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:51.071 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:51.071 10:31:31 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@268 -- # delete_main_bridge 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 
00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@130 -- # return 0 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:13:52.975 10:31:33 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # _dev=0 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@41 -- # dev_map=() 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/setup.sh@284 -- # iptr 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-save 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@542 -- # iptables-restore 00:13:52.975 00:13:52.975 real 0m26.693s 00:13:52.975 user 0m32.440s 00:13:52.975 sys 0m7.232s 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:13:52.975 ************************************ 00:13:52.975 END TEST nvmf_ns_masking 00:13:52.975 ************************************ 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:52.975 10:31:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:53.235 ************************************ 00:13:53.235 START TEST nvmf_nvme_cli 00:13:53.235 ************************************ 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:13:53.235 * Looking for test storage... 00:13:53.235 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lcov --version 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@333 -- # local ver1 ver1_l 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@334 -- # local ver2 ver2_l 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # IFS=.-: 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@336 -- # read -ra ver1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # IFS=.-: 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@337 -- # read -ra ver2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@338 -- # local 'op=<' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@340 -- # ver1_l=2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@341 -- # ver2_l=1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
scripts/common.sh@344 -- # case "$op" in 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@345 -- # : 1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v = 0 )) 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # decimal 1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@365 -- # ver1[v]=1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # decimal 2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@353 -- # local d=2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@355 -- # echo 2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@366 -- # ver2[v]=2 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@368 -- # return 0 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:13:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.235 --rc genhtml_branch_coverage=1 00:13:53.235 --rc genhtml_function_coverage=1 00:13:53.235 --rc genhtml_legend=1 00:13:53.235 --rc geninfo_all_blocks=1 00:13:53.235 --rc geninfo_unexecuted_blocks=1 00:13:53.235 00:13:53.235 ' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:13:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.235 --rc genhtml_branch_coverage=1 00:13:53.235 --rc genhtml_function_coverage=1 00:13:53.235 --rc genhtml_legend=1 00:13:53.235 --rc geninfo_all_blocks=1 00:13:53.235 --rc geninfo_unexecuted_blocks=1 00:13:53.235 00:13:53.235 ' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:13:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.235 --rc genhtml_branch_coverage=1 00:13:53.235 --rc genhtml_function_coverage=1 00:13:53.235 --rc genhtml_legend=1 00:13:53.235 --rc geninfo_all_blocks=1 00:13:53.235 --rc geninfo_unexecuted_blocks=1 00:13:53.235 00:13:53.235 ' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:13:53.235 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:13:53.235 --rc genhtml_branch_coverage=1 00:13:53.235 --rc genhtml_function_coverage=1 00:13:53.235 --rc genhtml_legend=1 00:13:53.235 --rc geninfo_all_blocks=1 00:13:53.235 --rc geninfo_unexecuted_blocks=1 00:13:53.235 00:13:53.235 ' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ 
Linux == FreeBSD ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@15 -- # shopt -s extglob 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@544 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.235 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@50 -- # : 0 
00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:13:53.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@54 -- # have_pci_nics=0 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # nvmftestinit 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # prepare_net_devs 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # local -g 
is_hw=no 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # remove_target_ns 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # xtrace_disable 00:13:53.236 10:31:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # pci_devs=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@131 -- # local -a pci_devs 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # pci_net_devs=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # pci_drivers=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@133 -- # local -A pci_drivers 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # net_devs=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@135 -- # local -ga net_devs 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli 
-- nvmf/common.sh@136 -- # e810=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@136 -- # local -ga e810 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # x722=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@137 -- # local -ga x722 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # mlx=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@138 -- # local -ga mlx 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:59.882 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:59.882 Found 0000:86:00.1 (0x8086 - 
0x159b) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:59.882 Found net devices under 0000:86:00.0: cvl_0_0 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # [[ up == up ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:59.882 Found net devices under 0000:86:00.1: cvl_0_1 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@262 -- # is_hw=yes 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:13:59.882 10:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@257 -- # create_target_ns 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@27 -- # local -gA dev_map 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@28 -- # local -g _dev 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:13:59.882 10:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # ips=() 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:13:59.882 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@70 -- # 
add_to_ns cvl_0_1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772161 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:13:59.883 10.0.0.1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:13:59.883 
10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@11 -- # local val=167772162 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:13:59.883 10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:13:59.883 10:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@38 -- # ping_ips 1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:13:59.883 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:59.883 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.390 ms 00:13:59.883 00:13:59.883 --- 10.0.0.1 ping statistics --- 00:13:59.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.883 rtt min/avg/max/mdev = 0.390/0.390/0.390/0.000 ms 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:13:59.883 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:13:59.883 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.218 ms 00:13:59.883 00:13:59.883 --- 10.0.0.2 ping statistics --- 00:13:59.883 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:59.883 rtt min/avg/max/mdev = 0.218/0.218/0.218/0.000 ms 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair++ )) 00:13:59.883 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # return 0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator0 00:13:59.884 10:31:39 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=initiator1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # get_net_dev target1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@107 -- # local dev=target1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@109 -- # return 1 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@168 -- # dev= 
00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@169 -- # return 0 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:13:59.884 10:31:39 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@726 -- # xtrace_disable 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@328 -- # nvmfpid=3186322 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # waitforlisten 3186322 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@835 -- # '[' -z 3186322 ']' 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/spdk.sock 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:59.884 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.884 [2024-11-20 10:31:40.071209] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:13:59.884 [2024-11-20 10:31:40.071262] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:59.884 [2024-11-20 10:31:40.151499] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:59.884 [2024-11-20 10:31:40.194802] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:59.884 [2024-11-20 10:31:40.194839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:59.884 [2024-11-20 10:31:40.194846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:59.884 [2024-11-20 10:31:40.194852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:59.884 [2024-11-20 10:31:40.194857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
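The `waitforlisten 3186322` call traced above blocks until the freshly started `nvmf_tgt` process is alive and its RPC socket appears. A minimal sketch of that pattern, reconstructed from what the trace shows (`local max_retries=100`, the default socket path `/var/tmp/spdk.sock`) rather than from the actual `autotest_common.sh` source:

```shell
# Hedged sketch of the waitforlisten pattern: poll until the target process
# is running and its UNIX-domain RPC socket file has appeared.
# (Assumption: reconstructed from the trace, not the real SPDK helper;
# retry count and socket path are taken from the log lines above.)
waitforlisten() {
  local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} i=0 max_retries=100
  while (( i++ < max_retries )); do
    kill -0 "$pid" 2>/dev/null || return 1   # process died before listening
    if [ -S "$rpc_addr" ]; then              # socket file exists
      return 0
    fi
    sleep 0.1
  done
  return 1                                   # timed out
}
```

In the log this runs against the `nvmf_tgt` launched inside the `nvmf_ns_spdk` namespace; the sketch only checks that the socket file exists, whereas the real helper also verifies the RPC endpoint responds.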
00:13:59.884 [2024-11-20 10:31:40.196374] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:13:59.884 [2024-11-20 10:31:40.196410] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:13:59.884 [2024-11-20 10:31:40.196515] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:13:59.884 [2024-11-20 10:31:40.196516] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@868 -- # return 0 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@732 -- # xtrace_disable 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.884 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 [2024-11-20 10:31:40.332075] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 
00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 Malloc0 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 Malloc1 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 [2024-11-20 10:31:40.427668] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:59.885 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:00.153 00:14:00.153 Discovery Log Number of Records 2, Generation counter 2 00:14:00.153 =====Discovery Log Entry 0====== 00:14:00.153 trtype: tcp 00:14:00.153 adrfam: ipv4 00:14:00.153 subtype: current discovery subsystem 00:14:00.153 treq: not required 00:14:00.153 portid: 0 00:14:00.153 trsvcid: 4420 
00:14:00.153 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:00.153 traddr: 10.0.0.2 00:14:00.153 eflags: explicit discovery connections, duplicate discovery information 00:14:00.153 sectype: none 00:14:00.153 =====Discovery Log Entry 1====== 00:14:00.153 trtype: tcp 00:14:00.153 adrfam: ipv4 00:14:00.153 subtype: nvme subsystem 00:14:00.153 treq: not required 00:14:00.153 portid: 0 00:14:00.153 trsvcid: 4420 00:14:00.153 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:00.153 traddr: 10.0.0.2 00:14:00.153 eflags: none 00:14:00.153 sectype: none 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:00.153 10:31:40 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:01.088 10:31:41 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:01.346 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1202 -- # local i=0 00:14:01.346 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:14:01.346 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1204 -- # [[ -n 2 ]] 00:14:01.346 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # nvme_device_counter=2 00:14:01.346 10:31:41 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1209 -- # sleep 2 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1211 -- # nvme_devices=2 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1212 -- # return 0 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.250 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:14:03.509 
10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ --------------------- == /dev/nvme* ]] 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n1 00:14:03.509 /dev/nvme0n2 ]] 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@362 -- # local dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:43 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@361 -- # nvme list 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ Node == /dev/nvme* ]] 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ 
--------------------- == /dev/nvme* ]] 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n1 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@365 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # echo /dev/nvme0n2 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@364 -- # read -r dev _ 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:03.509 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:03.768 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1223 -- # local i=0 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1235 -- # 
return 0 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # nvmfcleanup 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@99 -- # sync 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@102 -- # set +e 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@103 -- # for i in {1..20} 00:14:03.768 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:14:03.768 rmmod nvme_tcp 00:14:03.768 rmmod nvme_fabrics 00:14:04.026 rmmod nvme_keyring 00:14:04.026 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:14:04.026 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@106 -- # set -e 00:14:04.026 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@107 -- # return 0 00:14:04.026 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@336 -- # '[' -n 3186322 ']' 
00:14:04.026 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@337 -- # killprocess 3186322 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # '[' -z 3186322 ']' 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # kill -0 3186322 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # uname 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3186322 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3186322' 00:14:04.027 killing process with pid 3186322 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@973 -- # kill 3186322 00:14:04.027 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@978 -- # wait 3186322 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # nvmf_fini 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@264 -- # local dev 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@267 -- # remove_target_ns 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:14:04.285 10:31:44 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_target_ns 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@268 -- # delete_main_bridge 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@130 -- # return 0 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 
00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # _dev=0 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@41 -- # dev_map=() 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/setup.sh@284 -- # iptr 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-save 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:14:06.190 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@542 -- # iptables-restore 00:14:06.190 00:14:06.190 real 0m13.160s 00:14:06.190 user 0m20.091s 00:14:06.190 sys 0m5.170s 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:06.191 ************************************ 00:14:06.191 END TEST nvmf_nvme_cli 00:14:06.191 ************************************ 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:14:06.191 10:31:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:06.450 ************************************ 00:14:06.450 START TEST nvmf_vfio_user 00:14:06.450 ************************************ 00:14:06.450 10:31:46 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:06.450 * Looking for test storage... 00:14:06.450 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lcov --version 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # IFS=.-: 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@336 -- # read -ra ver1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # IFS=.-: 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@337 -- # read -ra ver2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@338 -- # local 'op=<' 00:14:06.450 10:31:47 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@340 -- # ver1_l=2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@341 -- # ver2_l=1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@344 -- # case "$op" in 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@345 -- # : 1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # decimal 1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@365 -- # ver1[v]=1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # decimal 2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@353 -- # local d=2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@355 -- # echo 2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@366 -- # ver2[v]=2 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@368 -- # return 0 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.450 --rc genhtml_branch_coverage=1 00:14:06.450 --rc genhtml_function_coverage=1 00:14:06.450 --rc genhtml_legend=1 00:14:06.450 --rc geninfo_all_blocks=1 00:14:06.450 --rc geninfo_unexecuted_blocks=1 00:14:06.450 00:14:06.450 ' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.450 --rc genhtml_branch_coverage=1 00:14:06.450 --rc genhtml_function_coverage=1 00:14:06.450 --rc genhtml_legend=1 00:14:06.450 --rc geninfo_all_blocks=1 00:14:06.450 --rc geninfo_unexecuted_blocks=1 00:14:06.450 00:14:06.450 ' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.450 --rc genhtml_branch_coverage=1 00:14:06.450 --rc genhtml_function_coverage=1 00:14:06.450 --rc genhtml_legend=1 00:14:06.450 --rc geninfo_all_blocks=1 00:14:06.450 --rc geninfo_unexecuted_blocks=1 00:14:06.450 00:14:06.450 ' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:06.450 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:06.450 --rc genhtml_branch_coverage=1 00:14:06.450 --rc genhtml_function_coverage=1 00:14:06.450 --rc genhtml_legend=1 00:14:06.450 --rc geninfo_all_blocks=1 00:14:06.450 --rc 
geninfo_unexecuted_blocks=1 00:14:06.450 00:14:06.450 ' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 
-- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@15 -- # shopt -s extglob 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:06.450 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@50 -- # : 0 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:06.451 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer 
expression expected 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3187611 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3187611' 00:14:06.451 Process pid: 3187611 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 
'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3187611 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3187611 ']' 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:06.451 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:06.451 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:06.710 [2024-11-20 10:31:47.194199] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:14:06.710 [2024-11-20 10:31:47.194247] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:06.710 [2024-11-20 10:31:47.269693] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:06.710 [2024-11-20 10:31:47.312808] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:06.710 [2024-11-20 10:31:47.312842] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:06.710 [2024-11-20 10:31:47.312849] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:06.710 [2024-11-20 10:31:47.312855] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:06.710 [2024-11-20 10:31:47.312860] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:06.710 [2024-11-20 10:31:47.314340] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:06.710 [2024-11-20 10:31:47.314375] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:06.710 [2024-11-20 10:31:47.314480] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:06.710 [2024-11-20 10:31:47.314481] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:06.710 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:06.710 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:06.710 10:31:47 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p 
/var/run/vfio-user/domain/vfio-user1/1 00:14:08.084 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:08.343 Malloc1 00:14:08.343 10:31:48 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:08.343 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:08.601 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:08.859 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:08.859 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:08.859 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:09.117 Malloc2 00:14:09.117 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:09.117 10:31:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:09.376 10:31:50 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:09.637 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:09.637 [2024-11-20 10:31:50.260377] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:14:09.637 [2024-11-20 10:31:50.260409] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3188102 ] 00:14:09.637 [2024-11-20 10:31:50.300707] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:09.637 [2024-11-20 10:31:50.309513] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:09.637 [2024-11-20 10:31:50.309534] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f9fcee34000 00:14:09.637 [2024-11-20 10:31:50.310511] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.311512] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.312520] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.313524] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.314530] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.315532] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:09.637 [2024-11-20 10:31:50.316536] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:09.638 
[2024-11-20 10:31:50.317541] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:09.638 [2024-11-20 10:31:50.318549] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:09.638 [2024-11-20 10:31:50.318559] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f9fcee29000 00:14:09.638 [2024-11-20 10:31:50.319476] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:09.638 [2024-11-20 10:31:50.328923] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:09.638 [2024-11-20 10:31:50.328946] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to connect adminq (no timeout) 00:14:09.638 [2024-11-20 10:31:50.333706] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:09.638 [2024-11-20 10:31:50.333744] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:09.638 [2024-11-20 10:31:50.333818] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for connect adminq (no timeout) 00:14:09.638 [2024-11-20 10:31:50.333834] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs (no timeout) 00:14:09.638 [2024-11-20 10:31:50.333839] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read vs wait for vs (no timeout) 00:14:09.638 [2024-11-20 10:31:50.334697] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:09.638 [2024-11-20 10:31:50.334706] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap (no timeout) 00:14:09.638 [2024-11-20 10:31:50.334712] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to read cap wait for cap (no timeout) 00:14:09.638 [2024-11-20 10:31:50.335707] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:09.638 [2024-11-20 10:31:50.335716] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en (no timeout) 00:14:09.638 [2024-11-20 10:31:50.335722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.336713] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:09.638 [2024-11-20 10:31:50.336722] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.337715] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:09.638 [2024-11-20 10:31:50.337725] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:09.638 [2024-11-20 10:31:50.337730] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.337736] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.337844] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Setting CC.EN = 1 00:14:09.638 [2024-11-20 10:31:50.337848] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.337852] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:09.638 [2024-11-20 10:31:50.338728] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:09.638 [2024-11-20 10:31:50.339730] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:09.638 [2024-11-20 10:31:50.340737] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:09.638 [2024-11-20 10:31:50.341742] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:09.638 [2024-11-20 10:31:50.341841] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:09.638 [2024-11-20 10:31:50.342751] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:09.638 [2024-11-20 10:31:50.342758] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:09.638 [2024-11-20 10:31:50.342762] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342779] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller (no timeout) 00:14:09.638 [2024-11-20 10:31:50.342786] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342802] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:09.638 [2024-11-20 10:31:50.342807] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:09.638 [2024-11-20 10:31:50.342811] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.638 [2024-11-20 10:31:50.342825] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:09.638 [2024-11-20 10:31:50.342866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:09.638 [2024-11-20 10:31:50.342875] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_xfer_size 131072 00:14:09.638 [2024-11-20 10:31:50.342880] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] MDTS max_xfer_size 131072 00:14:09.638 [2024-11-20 10:31:50.342884] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] CNTLID 0x0001 00:14:09.638 [2024-11-20 10:31:50.342888] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:09.638 [2024-11-20 10:31:50.342894] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] transport max_sges 1 00:14:09.638 [2024-11-20 10:31:50.342899] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] fuses compare and write: 1 00:14:09.638 [2024-11-20 10:31:50.342903] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to configure AER (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342912] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342921] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:09.638 [2024-11-20 10:31:50.342935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:09.638 [2024-11-20 10:31:50.342944] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:09.638 [2024-11-20 10:31:50.342952] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:09.638 [2024-11-20 10:31:50.342959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:09.638 [2024-11-20 10:31:50.342968] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:09.638 [2024-11-20 10:31:50.342973] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342979] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.342987] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:09.638 [2024-11-20 10:31:50.342996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:09.638 [2024-11-20 10:31:50.343003] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Controller adjusted keep alive timeout to 0 ms 00:14:09.638 [2024-11-20 10:31:50.343008] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343013] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set number of queues (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343019] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343027] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:09.638 [2024-11-20 10:31:50.343041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:09.638 [2024-11-20 10:31:50.343090] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify active ns (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343097] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:09.638 
[2024-11-20 10:31:50.343104] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:09.638 [2024-11-20 10:31:50.343108] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:09.638 [2024-11-20 10:31:50.343111] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.638 [2024-11-20 10:31:50.343117] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:09.638 [2024-11-20 10:31:50.343130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:09.638 [2024-11-20 10:31:50.343139] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Namespace 1 was added 00:14:09.638 [2024-11-20 10:31:50.343147] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343153] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:09.638 [2024-11-20 10:31:50.343159] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:09.638 [2024-11-20 10:31:50.343163] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:09.638 [2024-11-20 10:31:50.343166] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.638 [2024-11-20 10:31:50.343171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343228] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:09.639 [2024-11-20 10:31:50.343232] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:09.639 [2024-11-20 10:31:50.343235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.639 [2024-11-20 10:31:50.343240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343264] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343271] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set supported features (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343276] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343280] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343285] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to set host ID (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343289] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:09.639 [2024-11-20 10:31:50.343293] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to transport ready (timeout 30000 ms) 00:14:09.639 [2024-11-20 10:31:50.343298] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] setting state to ready (no timeout) 00:14:09.639 [2024-11-20 10:31:50.343314] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343333] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343355] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 
10:31:50.343374] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343400] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:09.639 [2024-11-20 10:31:50.343404] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:09.639 [2024-11-20 10:31:50.343407] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:09.639 [2024-11-20 10:31:50.343410] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:09.639 [2024-11-20 10:31:50.343413] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:09.639 [2024-11-20 10:31:50.343418] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:09.639 [2024-11-20 10:31:50.343425] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:09.639 [2024-11-20 10:31:50.343429] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:09.639 [2024-11-20 10:31:50.343432] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.639 [2024-11-20 10:31:50.343437] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343443] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:09.639 [2024-11-20 10:31:50.343447] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:09.639 [2024-11-20 10:31:50.343449] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.639 [2024-11-20 10:31:50.343455] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343461] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:09.639 [2024-11-20 10:31:50.343465] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:09.639 [2024-11-20 10:31:50.343468] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:09.639 [2024-11-20 10:31:50.343473] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:09.639 [2024-11-20 10:31:50.343479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:09.639 [2024-11-20 10:31:50.343507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:09.639 ===================================================== 00:14:09.639 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:09.639 ===================================================== 00:14:09.639 Controller Capabilities/Features 00:14:09.639 
================================ 00:14:09.639 Vendor ID: 4e58 00:14:09.639 Subsystem Vendor ID: 4e58 00:14:09.639 Serial Number: SPDK1 00:14:09.639 Model Number: SPDK bdev Controller 00:14:09.639 Firmware Version: 25.01 00:14:09.639 Recommended Arb Burst: 6 00:14:09.639 IEEE OUI Identifier: 8d 6b 50 00:14:09.639 Multi-path I/O 00:14:09.639 May have multiple subsystem ports: Yes 00:14:09.639 May have multiple controllers: Yes 00:14:09.639 Associated with SR-IOV VF: No 00:14:09.639 Max Data Transfer Size: 131072 00:14:09.639 Max Number of Namespaces: 32 00:14:09.639 Max Number of I/O Queues: 127 00:14:09.639 NVMe Specification Version (VS): 1.3 00:14:09.639 NVMe Specification Version (Identify): 1.3 00:14:09.639 Maximum Queue Entries: 256 00:14:09.639 Contiguous Queues Required: Yes 00:14:09.639 Arbitration Mechanisms Supported 00:14:09.639 Weighted Round Robin: Not Supported 00:14:09.639 Vendor Specific: Not Supported 00:14:09.639 Reset Timeout: 15000 ms 00:14:09.639 Doorbell Stride: 4 bytes 00:14:09.639 NVM Subsystem Reset: Not Supported 00:14:09.639 Command Sets Supported 00:14:09.639 NVM Command Set: Supported 00:14:09.639 Boot Partition: Not Supported 00:14:09.639 Memory Page Size Minimum: 4096 bytes 00:14:09.640 Memory Page Size Maximum: 4096 bytes 00:14:09.640 Persistent Memory Region: Not Supported 00:14:09.640 Optional Asynchronous Events Supported 00:14:09.640 Namespace Attribute Notices: Supported 00:14:09.640 Firmware Activation Notices: Not Supported 00:14:09.640 ANA Change Notices: Not Supported 00:14:09.640 PLE Aggregate Log Change Notices: Not Supported 00:14:09.640 LBA Status Info Alert Notices: Not Supported 00:14:09.640 EGE Aggregate Log Change Notices: Not Supported 00:14:09.640 Normal NVM Subsystem Shutdown event: Not Supported 00:14:09.640 Zone Descriptor Change Notices: Not Supported 00:14:09.640 Discovery Log Change Notices: Not Supported 00:14:09.640 Controller Attributes 00:14:09.640 128-bit Host Identifier: Supported 00:14:09.640 
Non-Operational Permissive Mode: Not Supported 00:14:09.640 NVM Sets: Not Supported 00:14:09.640 Read Recovery Levels: Not Supported 00:14:09.640 Endurance Groups: Not Supported 00:14:09.640 Predictable Latency Mode: Not Supported 00:14:09.640 Traffic Based Keep ALive: Not Supported 00:14:09.640 Namespace Granularity: Not Supported 00:14:09.640 SQ Associations: Not Supported 00:14:09.640 UUID List: Not Supported 00:14:09.640 Multi-Domain Subsystem: Not Supported 00:14:09.640 Fixed Capacity Management: Not Supported 00:14:09.640 Variable Capacity Management: Not Supported 00:14:09.640 Delete Endurance Group: Not Supported 00:14:09.640 Delete NVM Set: Not Supported 00:14:09.640 Extended LBA Formats Supported: Not Supported 00:14:09.640 Flexible Data Placement Supported: Not Supported 00:14:09.640 00:14:09.640 Controller Memory Buffer Support 00:14:09.640 ================================ 00:14:09.640 Supported: No 00:14:09.640 00:14:09.640 Persistent Memory Region Support 00:14:09.640 ================================ 00:14:09.640 Supported: No 00:14:09.640 00:14:09.640 Admin Command Set Attributes 00:14:09.640 ============================ 00:14:09.640 Security Send/Receive: Not Supported 00:14:09.640 Format NVM: Not Supported 00:14:09.640 Firmware Activate/Download: Not Supported 00:14:09.640 Namespace Management: Not Supported 00:14:09.640 Device Self-Test: Not Supported 00:14:09.640 Directives: Not Supported 00:14:09.640 NVMe-MI: Not Supported 00:14:09.640 Virtualization Management: Not Supported 00:14:09.640 Doorbell Buffer Config: Not Supported 00:14:09.640 Get LBA Status Capability: Not Supported 00:14:09.640 Command & Feature Lockdown Capability: Not Supported 00:14:09.640 Abort Command Limit: 4 00:14:09.640 Async Event Request Limit: 4 00:14:09.640 Number of Firmware Slots: N/A 00:14:09.640 Firmware Slot 1 Read-Only: N/A 00:14:09.640 Firmware Activation Without Reset: N/A 00:14:09.640 Multiple Update Detection Support: N/A 00:14:09.640 Firmware Update 
Granularity: No Information Provided 00:14:09.640 Per-Namespace SMART Log: No 00:14:09.640 Asymmetric Namespace Access Log Page: Not Supported 00:14:09.640 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:09.640 Command Effects Log Page: Supported 00:14:09.640 Get Log Page Extended Data: Supported 00:14:09.640 Telemetry Log Pages: Not Supported 00:14:09.640 Persistent Event Log Pages: Not Supported 00:14:09.640 Supported Log Pages Log Page: May Support 00:14:09.640 Commands Supported & Effects Log Page: Not Supported 00:14:09.640 Feature Identifiers & Effects Log Page:May Support 00:14:09.640 NVMe-MI Commands & Effects Log Page: May Support 00:14:09.640 Data Area 4 for Telemetry Log: Not Supported 00:14:09.640 Error Log Page Entries Supported: 128 00:14:09.640 Keep Alive: Supported 00:14:09.640 Keep Alive Granularity: 10000 ms 00:14:09.640 00:14:09.640 NVM Command Set Attributes 00:14:09.640 ========================== 00:14:09.640 Submission Queue Entry Size 00:14:09.640 Max: 64 00:14:09.640 Min: 64 00:14:09.640 Completion Queue Entry Size 00:14:09.640 Max: 16 00:14:09.640 Min: 16 00:14:09.640 Number of Namespaces: 32 00:14:09.640 Compare Command: Supported 00:14:09.640 Write Uncorrectable Command: Not Supported 00:14:09.640 Dataset Management Command: Supported 00:14:09.640 Write Zeroes Command: Supported 00:14:09.640 Set Features Save Field: Not Supported 00:14:09.640 Reservations: Not Supported 00:14:09.640 Timestamp: Not Supported 00:14:09.640 Copy: Supported 00:14:09.640 Volatile Write Cache: Present 00:14:09.640 Atomic Write Unit (Normal): 1 00:14:09.640 Atomic Write Unit (PFail): 1 00:14:09.640 Atomic Compare & Write Unit: 1 00:14:09.640 Fused Compare & Write: Supported 00:14:09.640 Scatter-Gather List 00:14:09.640 SGL Command Set: Supported (Dword aligned) 00:14:09.640 SGL Keyed: Not Supported 00:14:09.640 SGL Bit Bucket Descriptor: Not Supported 00:14:09.640 SGL Metadata Pointer: Not Supported 00:14:09.640 Oversized SGL: Not Supported 00:14:09.640 SGL 
Metadata Address: Not Supported 00:14:09.640 SGL Offset: Not Supported 00:14:09.640 Transport SGL Data Block: Not Supported 00:14:09.640 Replay Protected Memory Block: Not Supported 00:14:09.640 00:14:09.640 Firmware Slot Information 00:14:09.640 ========================= 00:14:09.640 Active slot: 1 00:14:09.640 Slot 1 Firmware Revision: 25.01 00:14:09.640 00:14:09.640 00:14:09.640 Commands Supported and Effects 00:14:09.640 ============================== 00:14:09.640 Admin Commands 00:14:09.640 -------------- 00:14:09.640 Get Log Page (02h): Supported 00:14:09.640 Identify (06h): Supported 00:14:09.640 Abort (08h): Supported 00:14:09.640 Set Features (09h): Supported 00:14:09.640 Get Features (0Ah): Supported 00:14:09.640 Asynchronous Event Request (0Ch): Supported 00:14:09.640 Keep Alive (18h): Supported 00:14:09.640 I/O Commands 00:14:09.640 ------------ 00:14:09.640 Flush (00h): Supported LBA-Change 00:14:09.640 Write (01h): Supported LBA-Change 00:14:09.640 Read (02h): Supported 00:14:09.640 Compare (05h): Supported 00:14:09.640 Write Zeroes (08h): Supported LBA-Change 00:14:09.640 Dataset Management (09h): Supported LBA-Change 00:14:09.640 Copy (19h): Supported LBA-Change 00:14:09.640 00:14:09.640 Error Log 00:14:09.640 ========= 00:14:09.640 00:14:09.640 Arbitration 00:14:09.640 =========== 00:14:09.640 Arbitration Burst: 1 00:14:09.640 00:14:09.640 Power Management 00:14:09.640 ================ 00:14:09.640 Number of Power States: 1 00:14:09.640 Current Power State: Power State #0 00:14:09.640 Power State #0: 00:14:09.640 Max Power: 0.00 W 00:14:09.640 Non-Operational State: Operational 00:14:09.640 Entry Latency: Not Reported 00:14:09.640 Exit Latency: Not Reported 00:14:09.640 Relative Read Throughput: 0 00:14:09.640 Relative Read Latency: 0 00:14:09.640 Relative Write Throughput: 0 00:14:09.640 Relative Write Latency: 0 00:14:09.640 Idle Power: Not Reported 00:14:09.640 Active Power: Not Reported 00:14:09.640 Non-Operational Permissive Mode: Not 
Supported 00:14:09.640 00:14:09.640 Health Information 00:14:09.640 ================== 00:14:09.640 Critical Warnings: 00:14:09.640 Available Spare Space: OK 00:14:09.640 Temperature: OK 00:14:09.640 Device Reliability: OK 00:14:09.640 Read Only: No 00:14:09.640 Volatile Memory Backup: OK 00:14:09.640 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:09.640 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:09.640 Available Spare: 0% 00:14:09.640 Available Spare Threshold: 0% 00:14:09.899 Life Percentage Used: 0% 00:14:09.899 Data Units Read: 0 00:14:09.899 Data Units Written: 0 00:14:09.899 Host Read Commands: 0 00:14:09.899 Host Write Commands: 0 00:14:09.899 Controller Busy Time: 0 minutes 00:14:09.899 Power Cycles: 0 00:14:09.899 Power On Hours: 0 hours 00:14:09.899 Unsafe Shutdowns: 0 00:14:09.899 Unrecoverable Media Errors: 0 00:14:09.899 Lifetime Error Log Entries: 0 00:14:09.899 Warning Temperature Time: 0 minutes 00:14:09.899 Critical Temperature Time: 0 minutes 00:14:09.899 00:14:09.899 Number of Queues 00:14:09.899 ================ 00:14:09.899 Number of I/O Submission Queues: 127 00:14:09.899 Number of I/O Completion Queues: 127 00:14:09.899 00:14:09.899 Active Namespaces 00:14:09.899 ================= 00:14:09.899 Namespace ID:1 00:14:09.899 Error Recovery Timeout: Unlimited 
00:14:09.640 [2024-11-20 10:31:50.343593] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 
00:14:09.640 [2024-11-20 10:31:50.343603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 
00:14:09.640 [2024-11-20 10:31:50.343629] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] Prepare to destruct SSD 
00:14:09.640 [2024-11-20 10:31:50.343639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:09.640 [2024-11-20 10:31:50.343644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:09.640 [2024-11-20 10:31:50.343650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:09.640 [2024-11-20 10:31:50.343655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:14:09.640 [2024-11-20 10:31:50.347209] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 
00:14:09.640 [2024-11-20 10:31:50.347220] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 
00:14:09.640 [2024-11-20 10:31:50.347783] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 
00:14:09.640 [2024-11-20 10:31:50.347832] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] RTD3E = 0 us 
00:14:09.640 [2024-11-20 10:31:50.347838] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown timeout = 10000 ms 
00:14:09.640 [2024-11-20 10:31:50.348780] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 
00:14:09.641 [2024-11-20 10:31:50.348791] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1, 0] shutdown complete in 0 milliseconds 
00:14:09.641 [2024-11-20 10:31:50.348842] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 
00:14:09.641 [2024-11-20 10:31:50.349810] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 
00:14:09.899 Command Set Identifier: NVM (00h) 00:14:09.899 Deallocate: Supported 00:14:09.899 Deallocated/Unwritten Error: Not Supported 00:14:09.900 Deallocated Read Value: Unknown 00:14:09.900 Deallocate in Write Zeroes: Not Supported 00:14:09.900 Deallocated Guard Field: 0xFFFF 00:14:09.900 Flush: Supported 00:14:09.900 Reservation: Supported 00:14:09.900 Namespace Sharing Capabilities: Multiple Controllers 00:14:09.900 Size (in LBAs): 131072 (0GiB) 00:14:09.900 Capacity (in LBAs): 131072 (0GiB) 00:14:09.900 Utilization (in LBAs): 131072 (0GiB) 00:14:09.900 NGUID: 432B9508B9A8491DA4CAC917F1F4A5E0 00:14:09.900 UUID: 432b9508-b9a8-491d-a4ca-c917f1f4a5e0 00:14:09.900 Thin Provisioning: Not Supported 00:14:09.900 Per-NS Atomic Units: Yes 00:14:09.900 Atomic Boundary Size (Normal): 0 00:14:09.900 Atomic Boundary Size (PFail): 0 00:14:09.900 Atomic Boundary Offset: 0 00:14:09.900 Maximum Single Source Range Length: 65535 00:14:09.900 Maximum Copy Length: 65535 00:14:09.900 Maximum Source Range Count: 1 00:14:09.900 NGUID/EUI64 Never Reused: No 00:14:09.900 Namespace Write Protected: No 00:14:09.900 Number of LBA Formats: 1 00:14:09.900 Current LBA Format: LBA Format #00 00:14:09.900 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:09.900 00:14:09.900 10:31:50 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:09.900 [2024-11-20 10:31:50.576058] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:15.169 Initializing NVMe Controllers 00:14:15.169 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:15.169 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 
00:14:15.169 Initialization complete. Launching workers. 00:14:15.169 ======================================================== 00:14:15.169 Latency(us) 00:14:15.169 Device Information : IOPS MiB/s Average min max 00:14:15.169 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39943.60 156.03 3204.30 930.14 8643.21 00:14:15.169 ======================================================== 00:14:15.169 Total : 39943.60 156.03 3204.30 930.14 8643.21 00:14:15.169 00:14:15.169 [2024-11-20 10:31:55.594685] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:15.169 10:31:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:15.169 [2024-11-20 10:31:55.831760] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:20.440 Initializing NVMe Controllers 00:14:20.441 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:20.441 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:20.441 Initialization complete. Launching workers. 
00:14:20.441 ======================================================== 00:14:20.441 Latency(us) 00:14:20.441 Device Information : IOPS MiB/s Average min max 00:14:20.441 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 15999.16 62.50 8011.53 4987.19 15964.96 00:14:20.441 ======================================================== 00:14:20.441 Total : 15999.16 62.50 8011.53 4987.19 15964.96 00:14:20.441 00:14:20.441 [2024-11-20 10:32:00.872673] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:20.441 10:32:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:20.441 [2024-11-20 10:32:01.088673] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:25.710 [2024-11-20 10:32:06.149427] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:25.710 Initializing NVMe Controllers 00:14:25.710 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.710 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:25.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:14:25.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:14:25.710 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:14:25.710 Initialization complete. Launching workers. 
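The MiB/s columns in the two perf tables above follow directly from the IOPS figures and the 4 KiB block size (`-o 4096`). A quick awk-only sanity check using the numbers from this run (the helper name `mibs_ok` and the 0.01 tolerance are illustrative, not part of SPDK):

```shell
# Check that MiB/s == IOPS * 4096 / 2^20 for the perf rows above.
mibs_ok() {
    awk -v iops="$1" -v reported="$2" 'BEGIN {
        mibs = iops * 4096 / 1048576          # 4 KiB blocks -> MiB/s
        d = mibs - reported; if (d < 0) d = -d
        exit (d < 0.01) ? 0 : 1               # allow rounding slack
    }'
}
mibs_ok 39943.60 156.03 && echo "read row consistent"
mibs_ok 15999.16 62.50  && echo "write row consistent"
```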
00:14:25.710 Starting thread on core 2 00:14:25.710 Starting thread on core 3 00:14:25.710 Starting thread on core 1 00:14:25.711 10:32:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:14:25.969 [2024-11-20 10:32:06.449605] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.258 [2024-11-20 10:32:09.511592] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.258 Initializing NVMe Controllers 00:14:29.258 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.258 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.258 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:14:29.258 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:14:29.258 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:14:29.258 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:14:29.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:29.258 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:29.258 Initialization complete. Launching workers. 
00:14:29.258 Starting thread on core 1 with urgent priority queue 00:14:29.258 Starting thread on core 2 with urgent priority queue 00:14:29.258 Starting thread on core 3 with urgent priority queue 00:14:29.258 Starting thread on core 0 with urgent priority queue 00:14:29.258 SPDK bdev Controller (SPDK1 ) core 0: 6607.33 IO/s 15.13 secs/100000 ios 00:14:29.258 SPDK bdev Controller (SPDK1 ) core 1: 6676.67 IO/s 14.98 secs/100000 ios 00:14:29.258 SPDK bdev Controller (SPDK1 ) core 2: 6319.33 IO/s 15.82 secs/100000 ios 00:14:29.258 SPDK bdev Controller (SPDK1 ) core 3: 6614.67 IO/s 15.12 secs/100000 ios 00:14:29.258 ======================================================== 00:14:29.258 00:14:29.258 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:29.258 [2024-11-20 10:32:09.799677] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:29.258 Initializing NVMe Controllers 00:14:29.258 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.258 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:29.258 Namespace ID: 1 size: 0GB 00:14:29.258 Initialization complete. 00:14:29.258 INFO: using host memory buffer for IO 00:14:29.258 Hello world! 
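The arbitration table's "secs/100000 ios" column is just 100000 divided by the per-core IO/s; recomputing it reproduces each printed figure (helper name `secs_per_100k` is ours, not SPDK's):

```shell
# Recompute "secs/100000 ios" from the IO/s column of the arbitration table.
secs_per_100k() {
    awk -v iops="$1" 'BEGIN { printf "%.2f", 100000 / iops }'
}
[ "$(secs_per_100k 6607.33)" = "15.13" ] && echo "core 0 ok"
[ "$(secs_per_100k 6676.67)" = "14.98" ] && echo "core 1 ok"
[ "$(secs_per_100k 6319.33)" = "15.82" ] && echo "core 2 ok"
[ "$(secs_per_100k 6614.67)" = "15.12" ] && echo "core 3 ok"
```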
00:14:29.258 [2024-11-20 10:32:09.833925] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:29.258 10:32:09 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:14:29.517 [2024-11-20 10:32:10.120638] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.452 Initializing NVMe Controllers 00:14:30.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:30.452 Initialization complete. Launching workers. 00:14:30.452 submit (in ns) avg, min, max = 6277.1, 3138.1, 3998550.5 00:14:30.452 complete (in ns) avg, min, max = 21580.3, 1717.1, 6988137.1 00:14:30.452 00:14:30.452 Submit histogram 00:14:30.452 ================ 00:14:30.452 Range in us Cumulative Count 00:14:30.452 3.124 - 3.139: 0.0059% ( 1) 00:14:30.452 3.139 - 3.154: 0.0238% ( 3) 00:14:30.452 3.154 - 3.170: 0.0713% ( 8) 00:14:30.452 3.170 - 3.185: 0.0832% ( 2) 00:14:30.452 3.185 - 3.200: 0.2198% ( 23) 00:14:30.452 3.200 - 3.215: 1.4023% ( 199) 00:14:30.452 3.215 - 3.230: 5.1872% ( 637) 00:14:30.452 3.230 - 3.246: 11.0933% ( 994) 00:14:30.452 3.246 - 3.261: 17.2074% ( 1029) 00:14:30.452 3.261 - 3.276: 24.1533% ( 1169) 00:14:30.452 3.276 - 3.291: 31.1408% ( 1176) 00:14:30.452 3.291 - 3.307: 37.0351% ( 992) 00:14:30.452 3.307 - 3.322: 43.2442% ( 1045) 00:14:30.452 3.322 - 3.337: 48.5859% ( 899) 00:14:30.452 3.337 - 3.352: 53.7493% ( 869) 00:14:30.452 3.352 - 3.368: 59.4831% ( 965) 00:14:30.452 3.368 - 3.383: 68.1402% ( 1457) 00:14:30.452 3.383 - 3.398: 74.0820% ( 1000) 00:14:30.452 3.398 - 3.413: 79.1028% ( 845) 00:14:30.452 3.413 - 3.429: 82.7689% ( 617) 00:14:30.452 3.429 - 3.444: 85.2466% ( 417) 
00:14:30.452 3.444 - 3.459: 86.9222% ( 282) 00:14:30.452 3.459 - 3.474: 87.6649% ( 125) 00:14:30.452 3.474 - 3.490: 88.1224% ( 77) 00:14:30.452 3.490 - 3.505: 88.4017% ( 47) 00:14:30.452 3.505 - 3.520: 88.8532% ( 76) 00:14:30.452 3.520 - 3.535: 89.6791% ( 139) 00:14:30.452 3.535 - 3.550: 90.4991% ( 138) 00:14:30.452 3.550 - 3.566: 91.5924% ( 184) 00:14:30.452 3.566 - 3.581: 92.5906% ( 168) 00:14:30.452 3.581 - 3.596: 93.5175% ( 156) 00:14:30.452 3.596 - 3.611: 94.5336% ( 171) 00:14:30.452 3.611 - 3.627: 95.3892% ( 144) 00:14:30.452 3.627 - 3.642: 96.2923% ( 152) 00:14:30.452 3.642 - 3.657: 97.1242% ( 140) 00:14:30.452 3.657 - 3.672: 97.8491% ( 122) 00:14:30.452 3.672 - 3.688: 98.3660% ( 87) 00:14:30.452 3.688 - 3.703: 98.6809% ( 53) 00:14:30.452 3.703 - 3.718: 99.0137% ( 56) 00:14:30.452 3.718 - 3.733: 99.2216% ( 35) 00:14:30.452 3.733 - 3.749: 99.3880% ( 28) 00:14:30.452 3.749 - 3.764: 99.4890% ( 17) 00:14:30.452 3.764 - 3.779: 99.6138% ( 21) 00:14:30.452 3.779 - 3.794: 99.6435% ( 5) 00:14:30.452 3.794 - 3.810: 99.6554% ( 2) 00:14:30.452 3.810 - 3.825: 99.6732% ( 3) 00:14:30.452 4.023 - 4.053: 99.6791% ( 1) 00:14:30.452 4.968 - 4.998: 99.6851% ( 1) 00:14:30.452 5.029 - 5.059: 99.6910% ( 1) 00:14:30.452 5.150 - 5.181: 99.6970% ( 1) 00:14:30.452 5.425 - 5.455: 99.7029% ( 1) 00:14:30.452 5.455 - 5.486: 99.7148% ( 2) 00:14:30.452 5.486 - 5.516: 99.7267% ( 2) 00:14:30.452 5.547 - 5.577: 99.7326% ( 1) 00:14:30.452 5.577 - 5.608: 99.7386% ( 1) 00:14:30.452 5.608 - 5.638: 99.7504% ( 2) 00:14:30.452 5.638 - 5.669: 99.7802% ( 5) 00:14:30.452 5.790 - 5.821: 99.7920% ( 2) 00:14:30.452 5.882 - 5.912: 99.8039% ( 2) 00:14:30.452 6.065 - 6.095: 99.8099% ( 1) 00:14:30.452 6.156 - 6.187: 99.8217% ( 2) 00:14:30.452 6.187 - 6.217: 99.8336% ( 2) 00:14:30.452 6.339 - 6.370: 99.8396% ( 1) 00:14:30.452 6.400 - 6.430: 99.8455% ( 1) 00:14:30.452 6.491 - 6.522: 99.8515% ( 1) 00:14:30.452 6.613 - 6.644: 99.8574% ( 1) 00:14:30.452 6.644 - 6.674: 99.8633% ( 1) 00:14:30.452 6.674 - 6.705: 
99.8693% ( 1) 00:14:30.452 6.827 - 6.857: 99.8752% ( 1) 00:14:30.452 6.888 - 6.918: 99.8812% ( 1) 00:14:30.452 7.406 - 7.436: 99.8871% ( 1) 00:14:30.452 7.436 - 7.467: 99.8930% ( 1) 00:14:30.452 7.619 - 7.650: 99.8990% ( 1) 00:14:30.452 8.290 - 8.350: 99.9049% ( 1) 00:14:30.452 8.350 - 8.411: 99.9109% ( 1) 00:14:30.452 13.836 - 13.897: 99.9168% ( 1) 00:14:30.452 18.773 - 18.895: 99.9228% ( 1) 00:14:30.452 1053.257 - 1061.059: 99.9287% ( 1) 00:14:30.452 3994.575 - 4025.783: 100.0000% ( 12) 00:14:30.452 00:14:30.452 [2024-11-20 10:32:11.142426] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:30.452 Complete histogram 00:14:30.452 ================== 00:14:30.452 Range in us Cumulative Count 00:14:30.452 1.714 - 1.722: 0.0297% ( 5) 00:14:30.452 1.722 - 1.730: 0.0713% ( 7) 00:14:30.452 1.730 - 1.737: 0.1129% ( 7) 00:14:30.452 1.737 - 1.745: 0.1485% ( 6) 00:14:30.452 1.745 - 1.752: 0.1604% ( 2) 00:14:30.452 1.752 - 1.760: 0.2258% ( 11) 00:14:30.452 1.760 - 1.768: 0.7249% ( 84) 00:14:30.452 1.768 - 1.775: 5.4248% ( 791) 00:14:30.452 1.775 - 1.783: 14.0939% ( 1459) 00:14:30.452 1.783 - 1.790: 19.5484% ( 918) 00:14:30.452 1.790 - 1.798: 21.4201% ( 315) 00:14:30.452 1.798 - 1.806: 22.6560% ( 208) 00:14:30.452 1.806 - 1.813: 23.9335% ( 215) 00:14:30.453 1.813 - 1.821: 26.3933% ( 414) 00:14:30.453 1.821 - 1.829: 40.1307% ( 2312) 00:14:30.453 1.829 - 1.836: 66.3815% ( 4418) 00:14:30.453 1.836 - 1.844: 82.8342% ( 2769) 00:14:30.453 1.844 - 1.851: 88.8116% ( 1006) 00:14:30.453 1.851 - 1.859: 91.9192% ( 523) 00:14:30.453 1.859 - 1.867: 93.9988% ( 350) 00:14:30.453 1.867 - 1.874: 94.9138% ( 154) 00:14:30.453 1.874 - 1.882: 95.2109% ( 50) 00:14:30.453 1.882 - 1.890: 95.5496% ( 57) 00:14:30.453 1.890 - 1.897: 96.2448% ( 117) 00:14:30.453 1.897 - 1.905: 97.3024% ( 178) 00:14:30.453 1.905 - 1.912: 98.2650% ( 162) 00:14:30.453 1.912 - 1.920: 98.8770% ( 103) 00:14:30.453 1.920 - 1.928: 99.1682% ( 49) 00:14:30.453 1.928 - 1.935: 
99.2870% ( 20) 00:14:30.453 1.935 - 1.943: 99.3642% ( 13) 00:14:30.453 1.943 - 1.950: 99.3821% ( 3) 00:14:30.453 2.011 - 2.027: 99.3880% ( 1) 00:14:30.453 3.368 - 3.383: 99.3939% ( 1) 00:14:30.453 3.703 - 3.718: 99.3999% ( 1) 00:14:30.453 3.931 - 3.962: 99.4058% ( 1) 00:14:30.453 4.084 - 4.114: 99.4118% ( 1) 00:14:30.453 4.145 - 4.175: 99.4177% ( 1) 00:14:30.453 4.175 - 4.206: 99.4236% ( 1) 00:14:30.453 4.206 - 4.236: 99.4355% ( 2) 00:14:30.453 4.236 - 4.267: 99.4474% ( 2) 00:14:30.453 4.358 - 4.389: 99.4534% ( 1) 00:14:30.453 4.419 - 4.450: 99.4593% ( 1) 00:14:30.453 4.450 - 4.480: 99.4652% ( 1) 00:14:30.453 4.571 - 4.602: 99.4712% ( 1) 00:14:30.453 4.693 - 4.724: 99.4771% ( 1) 00:14:30.453 4.754 - 4.785: 99.4831% ( 1) 00:14:30.453 4.876 - 4.907: 99.4890% ( 1) 00:14:30.453 5.577 - 5.608: 99.4949% ( 1) 00:14:30.453 5.943 - 5.973: 99.5009% ( 1) 00:14:30.453 6.979 - 7.010: 99.5068% ( 1) 00:14:30.453 38.522 - 38.766: 99.5128% ( 1) 00:14:30.453 3869.745 - 3885.349: 99.5187% ( 1) 00:14:30.453 3994.575 - 4025.783: 99.9881% ( 79) 00:14:30.453 5960.655 - 5991.863: 99.9941% ( 1) 00:14:30.453 6959.299 - 6990.507: 100.0000% ( 1) 00:14:30.453 00:14:30.711 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:14:30.711 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:30.711 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:14:30.711 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:14:30.711 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:30.711 [ 00:14:30.711 { 00:14:30.711 "nqn": 
"nqn.2014-08.org.nvmexpress.discovery", 00:14:30.711 "subtype": "Discovery", 00:14:30.711 "listen_addresses": [], 00:14:30.711 "allow_any_host": true, 00:14:30.711 "hosts": [] 00:14:30.711 }, 00:14:30.711 { 00:14:30.711 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:30.711 "subtype": "NVMe", 00:14:30.711 "listen_addresses": [ 00:14:30.711 { 00:14:30.711 "trtype": "VFIOUSER", 00:14:30.711 "adrfam": "IPv4", 00:14:30.711 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:30.711 "trsvcid": "0" 00:14:30.711 } 00:14:30.711 ], 00:14:30.711 "allow_any_host": true, 00:14:30.711 "hosts": [], 00:14:30.711 "serial_number": "SPDK1", 00:14:30.711 "model_number": "SPDK bdev Controller", 00:14:30.711 "max_namespaces": 32, 00:14:30.711 "min_cntlid": 1, 00:14:30.711 "max_cntlid": 65519, 00:14:30.711 "namespaces": [ 00:14:30.711 { 00:14:30.711 "nsid": 1, 00:14:30.711 "bdev_name": "Malloc1", 00:14:30.711 "name": "Malloc1", 00:14:30.711 "nguid": "432B9508B9A8491DA4CAC917F1F4A5E0", 00:14:30.711 "uuid": "432b9508-b9a8-491d-a4ca-c917f1f4a5e0" 00:14:30.711 } 00:14:30.711 ] 00:14:30.711 }, 00:14:30.711 { 00:14:30.711 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:30.711 "subtype": "NVMe", 00:14:30.711 "listen_addresses": [ 00:14:30.711 { 00:14:30.711 "trtype": "VFIOUSER", 00:14:30.711 "adrfam": "IPv4", 00:14:30.711 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:30.712 "trsvcid": "0" 00:14:30.712 } 00:14:30.712 ], 00:14:30.712 "allow_any_host": true, 00:14:30.712 "hosts": [], 00:14:30.712 "serial_number": "SPDK2", 00:14:30.712 "model_number": "SPDK bdev Controller", 00:14:30.712 "max_namespaces": 32, 00:14:30.712 "min_cntlid": 1, 00:14:30.712 "max_cntlid": 65519, 00:14:30.712 "namespaces": [ 00:14:30.712 { 00:14:30.712 "nsid": 1, 00:14:30.712 "bdev_name": "Malloc2", 00:14:30.712 "name": "Malloc2", 00:14:30.712 "nguid": "B9E56A33AA254C99973399D9D4CAEC3A", 00:14:30.712 "uuid": "b9e56a33-aa25-4c99-9733-99d9d4caec3a" 00:14:30.712 } 00:14:30.712 ] 00:14:30.712 } 00:14:30.712 ] 
00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3191737 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:30.712 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:14:30.970 [2024-11-20 10:32:11.557604] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:30.970 Malloc3 00:14:30.970 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:14:31.228 [2024-11-20 10:32:11.792361] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:31.228 10:32:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:31.228 Asynchronous Event Request test 00:14:31.228 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.228 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:14:31.228 Registering asynchronous event callbacks... 00:14:31.228 Starting namespace attribute notice tests for all controllers... 00:14:31.228 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:31.228 aer_cb - Changed Namespace 00:14:31.228 Cleaning up... 
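The `waitforfile /tmp/aer_touch_file` call above (from autotest_common.sh) blocks until the aer binary touches its sync file; a minimal sketch of that polling pattern, with an illustrative retry budget rather than SPDK's exact values:

```shell
# Poll until the given file exists, giving up after a bounded number of tries.
waitforfile() {
    i=0
    while [ ! -e "$1" ]; do
        i=$((i + 1))
        [ "$i" -gt 200 ] && return 1   # give up after ~20 s
        sleep 0.1
    done
    return 0
}
```

The log then removes the touch file (`rm -f /tmp/aer_touch_file`) so a later wait cannot match a stale copy.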
00:14:31.487 [ 00:14:31.487 { 00:14:31.487 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:31.487 "subtype": "Discovery", 00:14:31.487 "listen_addresses": [], 00:14:31.487 "allow_any_host": true, 00:14:31.487 "hosts": [] 00:14:31.487 }, 00:14:31.487 { 00:14:31.487 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:31.487 "subtype": "NVMe", 00:14:31.487 "listen_addresses": [ 00:14:31.487 { 00:14:31.487 "trtype": "VFIOUSER", 00:14:31.487 "adrfam": "IPv4", 00:14:31.487 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:31.487 "trsvcid": "0" 00:14:31.487 } 00:14:31.487 ], 00:14:31.488 "allow_any_host": true, 00:14:31.488 "hosts": [], 00:14:31.488 "serial_number": "SPDK1", 00:14:31.488 "model_number": "SPDK bdev Controller", 00:14:31.488 "max_namespaces": 32, 00:14:31.488 "min_cntlid": 1, 00:14:31.488 "max_cntlid": 65519, 00:14:31.488 "namespaces": [ 00:14:31.488 { 00:14:31.488 "nsid": 1, 00:14:31.488 "bdev_name": "Malloc1", 00:14:31.488 "name": "Malloc1", 00:14:31.488 "nguid": "432B9508B9A8491DA4CAC917F1F4A5E0", 00:14:31.488 "uuid": "432b9508-b9a8-491d-a4ca-c917f1f4a5e0" 00:14:31.488 }, 00:14:31.488 { 00:14:31.488 "nsid": 2, 00:14:31.488 "bdev_name": "Malloc3", 00:14:31.488 "name": "Malloc3", 00:14:31.488 "nguid": "19FCFD5B822740DFB88C39D71EF1A574", 00:14:31.488 "uuid": "19fcfd5b-8227-40df-b88c-39d71ef1a574" 00:14:31.488 } 00:14:31.488 ] 00:14:31.488 }, 00:14:31.488 { 00:14:31.488 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:31.488 "subtype": "NVMe", 00:14:31.488 "listen_addresses": [ 00:14:31.488 { 00:14:31.488 "trtype": "VFIOUSER", 00:14:31.488 "adrfam": "IPv4", 00:14:31.488 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:31.488 "trsvcid": "0" 00:14:31.488 } 00:14:31.488 ], 00:14:31.488 "allow_any_host": true, 00:14:31.488 "hosts": [], 00:14:31.488 "serial_number": "SPDK2", 00:14:31.488 "model_number": "SPDK bdev Controller", 00:14:31.488 "max_namespaces": 32, 00:14:31.488 "min_cntlid": 1, 00:14:31.488 "max_cntlid": 65519, 00:14:31.488 "namespaces": [ 
00:14:31.488 { 00:14:31.488 "nsid": 1, 00:14:31.488 "bdev_name": "Malloc2", 00:14:31.488 "name": "Malloc2", 00:14:31.488 "nguid": "B9E56A33AA254C99973399D9D4CAEC3A", 00:14:31.488 "uuid": "b9e56a33-aa25-4c99-9733-99d9d4caec3a" 00:14:31.488 } 00:14:31.488 ] 00:14:31.488 } 00:14:31.488 ] 00:14:31.488 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3191737 00:14:31.488 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:31.488 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:31.488 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:14:31.488 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:31.488 [2024-11-20 10:32:12.033834] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
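The `for i in $(seq 1 $NUM_DEVICES)` step above switches the test target to the second vfio-user device; a sketch of that loop, assuming NUM_DEVICES=2 to match the two cnode subsystems in this run:

```shell
# Iterate over the vfio-user devices exercised by nvmf_vfio_user.sh@80-83.
NUM_DEVICES=2
for i in $(seq 1 "$NUM_DEVICES"); do
    test_traddr=/var/run/vfio-user/domain/vfio-user$i/$i
    test_subnqn=nqn.2019-07.io.spdk:cnode$i
    echo "device $i: traddr=$test_traddr subnqn=$test_subnqn"
done
```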
00:14:31.488 [2024-11-20 10:32:12.033863] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3191782 ] 00:14:31.488 [2024-11-20 10:32:12.071552] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:14:31.488 [2024-11-20 10:32:12.080434] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.488 [2024-11-20 10:32:12.080457] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f5687878000 00:14:31.488 [2024-11-20 10:32:12.081433] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.082445] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.083455] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.084469] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.085478] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.086485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.087491] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:31.488 
[2024-11-20 10:32:12.088493] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:31.488 [2024-11-20 10:32:12.089501] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:31.488 [2024-11-20 10:32:12.089510] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f568786d000 00:14:31.488 [2024-11-20 10:32:12.090422] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.488 [2024-11-20 10:32:12.103781] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:14:31.488 [2024-11-20 10:32:12.103804] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to connect adminq (no timeout) 00:14:31.488 [2024-11-20 10:32:12.105871] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:31.488 [2024-11-20 10:32:12.105910] nvme_pcie_common.c: 159:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:31.488 [2024-11-20 10:32:12.105977] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for connect adminq (no timeout) 00:14:31.488 [2024-11-20 10:32:12.105989] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs (no timeout) 00:14:31.488 [2024-11-20 10:32:12.105994] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read vs wait for vs (no timeout) 00:14:31.488 [2024-11-20 10:32:12.106877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: 
ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:14:31.488 [2024-11-20 10:32:12.106887] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap (no timeout) 00:14:31.488 [2024-11-20 10:32:12.106896] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to read cap wait for cap (no timeout) 00:14:31.488 [2024-11-20 10:32:12.107885] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:14:31.488 [2024-11-20 10:32:12.107894] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en (no timeout) 00:14:31.488 [2024-11-20 10:32:12.107901] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to check en wait for cc (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.108886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:14:31.488 [2024-11-20 10:32:12.108895] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.109891] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:14:31.488 [2024-11-20 10:32:12.109900] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 0 && CSTS.RDY = 0 00:14:31.488 [2024-11-20 10:32:12.109904] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to controller is disabled (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.109910] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.110018] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Setting CC.EN = 1 00:14:31.488 [2024-11-20 10:32:12.110022] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.110026] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:14:31.488 [2024-11-20 10:32:12.110897] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:14:31.488 [2024-11-20 10:32:12.111902] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:14:31.488 [2024-11-20 10:32:12.112906] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:31.488 [2024-11-20 10:32:12.113906] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:31.488 [2024-11-20 10:32:12.113942] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:31.488 [2024-11-20 10:32:12.114917] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:14:31.488 [2024-11-20 10:32:12.114926] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:31.488 [2024-11-20 10:32:12.114931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to reset admin queue (timeout 30000 ms) 00:14:31.488 [2024-11-20 10:32:12.114948] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller (no timeout) 00:14:31.488 [2024-11-20 10:32:12.114955] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify controller (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.114966] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.489 [2024-11-20 10:32:12.114972] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.489 [2024-11-20 10:32:12.114976] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.489 [2024-11-20 10:32:12.114987] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.125210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.125221] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_xfer_size 131072 00:14:31.489 [2024-11-20 10:32:12.125225] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] MDTS max_xfer_size 131072 00:14:31.489 [2024-11-20 10:32:12.125230] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] CNTLID 0x0001 00:14:31.489 [2024-11-20 10:32:12.125235] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:31.489 [2024-11-20 10:32:12.125241] 
nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] transport max_sges 1 00:14:31.489 [2024-11-20 10:32:12.125246] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] fuses compare and write: 1 00:14:31.489 [2024-11-20 10:32:12.125250] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to configure AER (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.125258] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for configure aer (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.125268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.133207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.133218] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.489 [2024-11-20 10:32:12.133226] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.489 [2024-11-20 10:32:12.133233] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.489 [2024-11-20 10:32:12.133241] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:31.489 [2024-11-20 10:32:12.133245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set keep alive timeout (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.133251] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: 
*DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.133259] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.141206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.141216] nvme_ctrlr.c:3047:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Controller adjusted keep alive timeout to 0 ms 00:14:31.489 [2024-11-20 10:32:12.141222] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.141227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set number of queues (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.141235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for set number of queues (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.141243] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.149207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.149262] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify active ns (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.149270] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify active ns (timeout 30000 ms) 00:14:31.489 
[2024-11-20 10:32:12.149277] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:31.489 [2024-11-20 10:32:12.149281] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:31.489 [2024-11-20 10:32:12.149284] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.489 [2024-11-20 10:32:12.149290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.157206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.157217] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Namespace 1 was added 00:14:31.489 [2024-11-20 10:32:12.157228] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.157235] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify ns (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.157241] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.489 [2024-11-20 10:32:12.157245] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.489 [2024-11-20 10:32:12.157248] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.489 [2024-11-20 10:32:12.157254] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.165207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS 
(00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.165220] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify namespace id descriptors (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.165227] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.165233] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:31.489 [2024-11-20 10:32:12.165237] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.489 [2024-11-20 10:32:12.165240] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.489 [2024-11-20 10:32:12.165246] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.173206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.173215] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173224] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported log pages (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173231] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set supported features (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173236] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host behavior 
support feature (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173240] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173245] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to set host ID (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173249] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] NVMe-oF transport - not sending Set Features - Host ID 00:14:31.489 [2024-11-20 10:32:12.173253] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to transport ready (timeout 30000 ms) 00:14:31.489 [2024-11-20 10:32:12.173257] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] setting state to ready (no timeout) 00:14:31.489 [2024-11-20 10:32:12.173272] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.181207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.181219] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.189206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 10:32:12.189217] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:31.489 [2024-11-20 10:32:12.197207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:31.489 [2024-11-20 
10:32:12.197218] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:31.490 [2024-11-20 10:32:12.205207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:31.490 [2024-11-20 10:32:12.205222] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:31.490 [2024-11-20 10:32:12.205226] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:31.490 [2024-11-20 10:32:12.205229] nvme_pcie_common.c:1275:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:31.490 [2024-11-20 10:32:12.205232] nvme_pcie_common.c:1291:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:31.490 [2024-11-20 10:32:12.205235] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:31.490 [2024-11-20 10:32:12.205240] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:31.490 [2024-11-20 10:32:12.205247] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:31.490 [2024-11-20 10:32:12.205251] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:31.490 [2024-11-20 10:32:12.205253] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.490 [2024-11-20 10:32:12.205259] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:31.490 [2024-11-20 10:32:12.205265] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:31.490 [2024-11-20 10:32:12.205270] 
nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:31.490 [2024-11-20 10:32:12.205273] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.490 [2024-11-20 10:32:12.205279] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:31.490 [2024-11-20 10:32:12.205285] nvme_pcie_common.c:1238:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:31.490 [2024-11-20 10:32:12.205289] nvme_pcie_common.c:1266:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:31.490 [2024-11-20 10:32:12.205292] nvme_pcie_common.c:1326:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:31.490 [2024-11-20 10:32:12.205297] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:31.490 [2024-11-20 10:32:12.213208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:31.490 [2024-11-20 10:32:12.213221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:31.490 [2024-11-20 10:32:12.213230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:31.490 [2024-11-20 10:32:12.213236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:31.490 ===================================================== 00:14:31.490 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:31.490 ===================================================== 00:14:31.490 Controller Capabilities/Features 00:14:31.490 
================================ 00:14:31.490 Vendor ID: 4e58 00:14:31.490 Subsystem Vendor ID: 4e58 00:14:31.490 Serial Number: SPDK2 00:14:31.490 Model Number: SPDK bdev Controller 00:14:31.490 Firmware Version: 25.01 00:14:31.490 Recommended Arb Burst: 6 00:14:31.490 IEEE OUI Identifier: 8d 6b 50 00:14:31.490 Multi-path I/O 00:14:31.490 May have multiple subsystem ports: Yes 00:14:31.490 May have multiple controllers: Yes 00:14:31.490 Associated with SR-IOV VF: No 00:14:31.490 Max Data Transfer Size: 131072 00:14:31.490 Max Number of Namespaces: 32 00:14:31.490 Max Number of I/O Queues: 127 00:14:31.490 NVMe Specification Version (VS): 1.3 00:14:31.490 NVMe Specification Version (Identify): 1.3 00:14:31.490 Maximum Queue Entries: 256 00:14:31.490 Contiguous Queues Required: Yes 00:14:31.490 Arbitration Mechanisms Supported 00:14:31.490 Weighted Round Robin: Not Supported 00:14:31.490 Vendor Specific: Not Supported 00:14:31.490 Reset Timeout: 15000 ms 00:14:31.490 Doorbell Stride: 4 bytes 00:14:31.490 NVM Subsystem Reset: Not Supported 00:14:31.490 Command Sets Supported 00:14:31.490 NVM Command Set: Supported 00:14:31.490 Boot Partition: Not Supported 00:14:31.490 Memory Page Size Minimum: 4096 bytes 00:14:31.490 Memory Page Size Maximum: 4096 bytes 00:14:31.490 Persistent Memory Region: Not Supported 00:14:31.490 Optional Asynchronous Events Supported 00:14:31.490 Namespace Attribute Notices: Supported 00:14:31.490 Firmware Activation Notices: Not Supported 00:14:31.490 ANA Change Notices: Not Supported 00:14:31.490 PLE Aggregate Log Change Notices: Not Supported 00:14:31.490 LBA Status Info Alert Notices: Not Supported 00:14:31.490 EGE Aggregate Log Change Notices: Not Supported 00:14:31.490 Normal NVM Subsystem Shutdown event: Not Supported 00:14:31.490 Zone Descriptor Change Notices: Not Supported 00:14:31.490 Discovery Log Change Notices: Not Supported 00:14:31.490 Controller Attributes 00:14:31.490 128-bit Host Identifier: Supported 00:14:31.490 
Non-Operational Permissive Mode: Not Supported 00:14:31.490 NVM Sets: Not Supported 00:14:31.490 Read Recovery Levels: Not Supported 00:14:31.490 Endurance Groups: Not Supported 00:14:31.490 Predictable Latency Mode: Not Supported 00:14:31.490 Traffic Based Keep ALive: Not Supported 00:14:31.490 Namespace Granularity: Not Supported 00:14:31.490 SQ Associations: Not Supported 00:14:31.490 UUID List: Not Supported 00:14:31.490 Multi-Domain Subsystem: Not Supported 00:14:31.490 Fixed Capacity Management: Not Supported 00:14:31.490 Variable Capacity Management: Not Supported 00:14:31.490 Delete Endurance Group: Not Supported 00:14:31.490 Delete NVM Set: Not Supported 00:14:31.490 Extended LBA Formats Supported: Not Supported 00:14:31.490 Flexible Data Placement Supported: Not Supported 00:14:31.490 00:14:31.490 Controller Memory Buffer Support 00:14:31.490 ================================ 00:14:31.490 Supported: No 00:14:31.490 00:14:31.490 Persistent Memory Region Support 00:14:31.490 ================================ 00:14:31.490 Supported: No 00:14:31.490 00:14:31.490 Admin Command Set Attributes 00:14:31.490 ============================ 00:14:31.490 Security Send/Receive: Not Supported 00:14:31.490 Format NVM: Not Supported 00:14:31.490 Firmware Activate/Download: Not Supported 00:14:31.490 Namespace Management: Not Supported 00:14:31.490 Device Self-Test: Not Supported 00:14:31.490 Directives: Not Supported 00:14:31.490 NVMe-MI: Not Supported 00:14:31.490 Virtualization Management: Not Supported 00:14:31.490 Doorbell Buffer Config: Not Supported 00:14:31.490 Get LBA Status Capability: Not Supported 00:14:31.490 Command & Feature Lockdown Capability: Not Supported 00:14:31.490 Abort Command Limit: 4 00:14:31.490 Async Event Request Limit: 4 00:14:31.490 Number of Firmware Slots: N/A 00:14:31.490 Firmware Slot 1 Read-Only: N/A 00:14:31.490 Firmware Activation Without Reset: N/A 00:14:31.490 Multiple Update Detection Support: N/A 00:14:31.490 Firmware Update 
Granularity: No Information Provided 00:14:31.490 Per-Namespace SMART Log: No 00:14:31.490 Asymmetric Namespace Access Log Page: Not Supported 00:14:31.490 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:14:31.490 Command Effects Log Page: Supported 00:14:31.490 Get Log Page Extended Data: Supported 00:14:31.490 Telemetry Log Pages: Not Supported 00:14:31.490 Persistent Event Log Pages: Not Supported 00:14:31.490 Supported Log Pages Log Page: May Support 00:14:31.490 Commands Supported & Effects Log Page: Not Supported 00:14:31.490 Feature Identifiers & Effects Log Page:May Support 00:14:31.490 NVMe-MI Commands & Effects Log Page: May Support 00:14:31.490 Data Area 4 for Telemetry Log: Not Supported 00:14:31.491 Error Log Page Entries Supported: 128 00:14:31.491 Keep Alive: Supported 00:14:31.491 Keep Alive Granularity: 10000 ms 00:14:31.491 00:14:31.491 NVM Command Set Attributes 00:14:31.491 ========================== 00:14:31.491 Submission Queue Entry Size 00:14:31.491 Max: 64 00:14:31.491 Min: 64 00:14:31.491 Completion Queue Entry Size 00:14:31.491 Max: 16 00:14:31.491 Min: 16 00:14:31.491 Number of Namespaces: 32 00:14:31.491 Compare Command: Supported 00:14:31.491 Write Uncorrectable Command: Not Supported 00:14:31.491 Dataset Management Command: Supported 00:14:31.491 Write Zeroes Command: Supported 00:14:31.491 Set Features Save Field: Not Supported 00:14:31.491 Reservations: Not Supported 00:14:31.491 Timestamp: Not Supported 00:14:31.491 Copy: Supported 00:14:31.491 Volatile Write Cache: Present 00:14:31.491 Atomic Write Unit (Normal): 1 00:14:31.491 Atomic Write Unit (PFail): 1 00:14:31.491 Atomic Compare & Write Unit: 1 00:14:31.491 Fused Compare & Write: Supported 00:14:31.491 Scatter-Gather List 00:14:31.491 SGL Command Set: Supported (Dword aligned) 00:14:31.491 SGL Keyed: Not Supported 00:14:31.491 SGL Bit Bucket Descriptor: Not Supported 00:14:31.491 SGL Metadata Pointer: Not Supported 00:14:31.491 Oversized SGL: Not Supported 00:14:31.491 SGL 
Metadata Address: Not Supported 00:14:31.491 SGL Offset: Not Supported 00:14:31.491 Transport SGL Data Block: Not Supported 00:14:31.491 Replay Protected Memory Block: Not Supported 00:14:31.491 00:14:31.491 Firmware Slot Information 00:14:31.491 ========================= 00:14:31.491 Active slot: 1 00:14:31.491 Slot 1 Firmware Revision: 25.01 00:14:31.491 00:14:31.491 00:14:31.491 Commands Supported and Effects 00:14:31.491 ============================== 00:14:31.491 Admin Commands 00:14:31.491 -------------- 00:14:31.491 Get Log Page (02h): Supported 00:14:31.491 Identify (06h): Supported 00:14:31.491 Abort (08h): Supported 00:14:31.491 Set Features (09h): Supported 00:14:31.491 Get Features (0Ah): Supported 00:14:31.491 Asynchronous Event Request (0Ch): Supported 00:14:31.491 Keep Alive (18h): Supported 00:14:31.491 I/O Commands 00:14:31.491 ------------ 00:14:31.491 Flush (00h): Supported LBA-Change 00:14:31.491 Write (01h): Supported LBA-Change 00:14:31.491 Read (02h): Supported 00:14:31.491 Compare (05h): Supported 00:14:31.491 Write Zeroes (08h): Supported LBA-Change 00:14:31.491 Dataset Management (09h): Supported LBA-Change 00:14:31.491 Copy (19h): Supported LBA-Change 00:14:31.491 00:14:31.491 Error Log 00:14:31.491 ========= 00:14:31.491 00:14:31.491 Arbitration 00:14:31.491 =========== 00:14:31.491 Arbitration Burst: 1 00:14:31.491 00:14:31.491 Power Management 00:14:31.491 ================ 00:14:31.491 Number of Power States: 1 00:14:31.491 Current Power State: Power State #0 00:14:31.491 Power State #0: 00:14:31.491 Max Power: 0.00 W 00:14:31.491 Non-Operational State: Operational 00:14:31.491 Entry Latency: Not Reported 00:14:31.491 Exit Latency: Not Reported 00:14:31.491 Relative Read Throughput: 0 00:14:31.491 Relative Read Latency: 0 00:14:31.491 Relative Write Throughput: 0 00:14:31.491 Relative Write Latency: 0 00:14:31.491 Idle Power: Not Reported 00:14:31.491 Active Power: Not Reported 00:14:31.491 Non-Operational Permissive Mode: Not 
Supported 00:14:31.491 00:14:31.491 Health Information 00:14:31.491 ================== 00:14:31.491 Critical Warnings: 00:14:31.491 Available Spare Space: OK 00:14:31.491 Temperature: OK 00:14:31.491 Device Reliability: OK 00:14:31.491 Read Only: No 00:14:31.491 Volatile Memory Backup: OK 00:14:31.491 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:31.491 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:31.491 Available Spare: 0% 00:14:31.491 Available Sp[2024-11-20 10:32:12.213324] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:31.750 [2024-11-20 10:32:12.221208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:31.750 [2024-11-20 10:32:12.221236] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] Prepare to destruct SSD 00:14:31.750 [2024-11-20 10:32:12.221245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.750 [2024-11-20 10:32:12.221250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.750 [2024-11-20 10:32:12.221256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.750 [2024-11-20 10:32:12.221261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:31.750 [2024-11-20 10:32:12.221299] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:14:31.750 [2024-11-20 10:32:12.221308] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:14:31.750 
[2024-11-20 10:32:12.222298] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:31.750 [2024-11-20 10:32:12.222339] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] RTD3E = 0 us 00:14:31.750 [2024-11-20 10:32:12.222346] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown timeout = 10000 ms 00:14:31.750 [2024-11-20 10:32:12.223302] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:14:31.750 [2024-11-20 10:32:12.223313] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2, 0] shutdown complete in 0 milliseconds 00:14:31.750 [2024-11-20 10:32:12.223358] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:14:31.750 [2024-11-20 10:32:12.224318] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:31.750 are Threshold: 0% 00:14:31.750 Life Percentage Used: 0% 00:14:31.750 Data Units Read: 0 00:14:31.750 Data Units Written: 0 00:14:31.750 Host Read Commands: 0 00:14:31.750 Host Write Commands: 0 00:14:31.750 Controller Busy Time: 0 minutes 00:14:31.750 Power Cycles: 0 00:14:31.750 Power On Hours: 0 hours 00:14:31.750 Unsafe Shutdowns: 0 00:14:31.750 Unrecoverable Media Errors: 0 00:14:31.750 Lifetime Error Log Entries: 0 00:14:31.750 Warning Temperature Time: 0 minutes 00:14:31.750 Critical Temperature Time: 0 minutes 00:14:31.750 00:14:31.750 Number of Queues 00:14:31.750 ================ 00:14:31.750 Number of I/O Submission Queues: 127 00:14:31.750 Number of I/O Completion Queues: 127 00:14:31.750 00:14:31.750 Active Namespaces 00:14:31.750 ================= 00:14:31.750 Namespace ID:1 00:14:31.750 Error Recovery Timeout: Unlimited 
00:14:31.750 Command Set Identifier: NVM (00h) 00:14:31.750 Deallocate: Supported 00:14:31.750 Deallocated/Unwritten Error: Not Supported 00:14:31.750 Deallocated Read Value: Unknown 00:14:31.750 Deallocate in Write Zeroes: Not Supported 00:14:31.750 Deallocated Guard Field: 0xFFFF 00:14:31.750 Flush: Supported 00:14:31.750 Reservation: Supported 00:14:31.750 Namespace Sharing Capabilities: Multiple Controllers 00:14:31.750 Size (in LBAs): 131072 (0GiB) 00:14:31.750 Capacity (in LBAs): 131072 (0GiB) 00:14:31.750 Utilization (in LBAs): 131072 (0GiB) 00:14:31.750 NGUID: B9E56A33AA254C99973399D9D4CAEC3A 00:14:31.750 UUID: b9e56a33-aa25-4c99-9733-99d9d4caec3a 00:14:31.750 Thin Provisioning: Not Supported 00:14:31.750 Per-NS Atomic Units: Yes 00:14:31.750 Atomic Boundary Size (Normal): 0 00:14:31.750 Atomic Boundary Size (PFail): 0 00:14:31.750 Atomic Boundary Offset: 0 00:14:31.750 Maximum Single Source Range Length: 65535 00:14:31.750 Maximum Copy Length: 65535 00:14:31.750 Maximum Source Range Count: 1 00:14:31.750 NGUID/EUI64 Never Reused: No 00:14:31.750 Namespace Write Protected: No 00:14:31.750 Number of LBA Formats: 1 00:14:31.750 Current LBA Format: LBA Format #00 00:14:31.751 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:31.751 00:14:31.751 10:32:12 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:31.751 [2024-11-20 10:32:12.453573] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:37.023 Initializing NVMe Controllers 00:14:37.023 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:37.023 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 
00:14:37.023 Initialization complete. Launching workers. 00:14:37.023 ======================================================== 00:14:37.023 Latency(us) 00:14:37.023 Device Information : IOPS MiB/s Average min max 00:14:37.023 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39862.08 155.71 3210.89 940.22 10643.64 00:14:37.023 ======================================================== 00:14:37.023 Total : 39862.08 155.71 3210.89 940.22 10643.64 00:14:37.023 00:14:37.023 [2024-11-20 10:32:17.559470] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:37.023 10:32:17 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:37.282 [2024-11-20 10:32:17.789103] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:42.550 Initializing NVMe Controllers 00:14:42.550 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:42.550 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:14:42.550 Initialization complete. Launching workers. 
00:14:42.550 ======================================================== 00:14:42.550 Latency(us) 00:14:42.550 Device Information : IOPS MiB/s Average min max 00:14:42.551 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39917.58 155.93 3206.21 927.01 7385.48 00:14:42.551 ======================================================== 00:14:42.551 Total : 39917.58 155.93 3206.21 927.01 7385.48 00:14:42.551 00:14:42.551 [2024-11-20 10:32:22.805140] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:42.551 10:32:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:14:42.551 [2024-11-20 10:32:23.015386] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:47.820 [2024-11-20 10:32:28.157294] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:47.820 Initializing NVMe Controllers 00:14:47.820 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:47.820 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:14:47.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:14:47.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:14:47.820 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:14:47.820 Initialization complete. Launching workers. 
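The MiB/s column in the two `spdk_nvme_perf` latency tables above follows directly from the IOPS column: both runs use 4096-byte I/Os (`-o 4096`), so MiB/s is IOPS × I/O size / 2^20. A small check with the values copied from the log:

```python
# Both perf runs above were launched with -o 4096 (4 KiB I/Os).
IO_SIZE = 4096

def mib_per_s(iops: float) -> float:
    # Convert an IOPS figure to MiB/s for fixed-size I/Os, rounded as perf prints it.
    return round(iops * IO_SIZE / (1 << 20), 2)

print(mib_per_s(39862.08))  # read run  -> 155.71
print(mib_per_s(39917.58))  # write run -> 155.93
```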
00:14:47.820 Starting thread on core 2 00:14:47.820 Starting thread on core 3 00:14:47.820 Starting thread on core 1 00:14:47.820 10:32:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:14:47.820 [2024-11-20 10:32:28.455642] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.108 [2024-11-20 10:32:31.521290] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.108 Initializing NVMe Controllers 00:14:51.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:14:51.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:14:51.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:14:51.108 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:14:51.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:14:51.108 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:14:51.108 Initialization complete. Launching workers. 
00:14:51.108 Starting thread on core 1 with urgent priority queue 00:14:51.108 Starting thread on core 2 with urgent priority queue 00:14:51.108 Starting thread on core 3 with urgent priority queue 00:14:51.108 Starting thread on core 0 with urgent priority queue 00:14:51.108 SPDK bdev Controller (SPDK2 ) core 0: 8147.00 IO/s 12.27 secs/100000 ios 00:14:51.108 SPDK bdev Controller (SPDK2 ) core 1: 8343.33 IO/s 11.99 secs/100000 ios 00:14:51.108 SPDK bdev Controller (SPDK2 ) core 2: 8866.33 IO/s 11.28 secs/100000 ios 00:14:51.108 SPDK bdev Controller (SPDK2 ) core 3: 6851.67 IO/s 14.59 secs/100000 ios 00:14:51.108 ======================================================== 00:14:51.108 00:14:51.108 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:51.108 [2024-11-20 10:32:31.810654] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:51.108 Initializing NVMe Controllers 00:14:51.108 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.108 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:51.108 Namespace ID: 1 size: 0GB 00:14:51.108 Initialization complete. 00:14:51.108 INFO: using host memory buffer for IO 00:14:51.108 Hello world! 
00:14:51.108 [2024-11-20 10:32:31.820706] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:51.368 10:32:31 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:14:51.626 [2024-11-20 10:32:32.098016] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:52.564 Initializing NVMe Controllers 00:14:52.564 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.564 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:52.564 Initialization complete. Launching workers. 00:14:52.564 submit (in ns) avg, min, max = 7514.5, 3164.8, 3999267.6 00:14:52.564 complete (in ns) avg, min, max = 21256.9, 1737.1, 4994657.1 00:14:52.564 00:14:52.564 Submit histogram 00:14:52.564 ================ 00:14:52.564 Range in us Cumulative Count 00:14:52.564 3.154 - 3.170: 0.0061% ( 1) 00:14:52.564 3.170 - 3.185: 0.0121% ( 1) 00:14:52.564 3.185 - 3.200: 0.0545% ( 7) 00:14:52.564 3.200 - 3.215: 0.1211% ( 11) 00:14:52.564 3.215 - 3.230: 0.3089% ( 31) 00:14:52.564 3.230 - 3.246: 1.1750% ( 143) 00:14:52.564 3.246 - 3.261: 4.3852% ( 530) 00:14:52.564 3.261 - 3.276: 10.5451% ( 1017) 00:14:52.564 3.276 - 3.291: 16.6445% ( 1007) 00:14:52.564 3.291 - 3.307: 23.6281% ( 1153) 00:14:52.564 3.307 - 3.322: 30.6118% ( 1153) 00:14:52.564 3.322 - 3.337: 36.3961% ( 955) 00:14:52.564 3.337 - 3.352: 42.1320% ( 947) 00:14:52.564 3.352 - 3.368: 47.3895% ( 868) 00:14:52.564 3.368 - 3.383: 52.9679% ( 921) 00:14:52.564 3.383 - 3.398: 58.1950% ( 863) 00:14:52.564 3.398 - 3.413: 65.4391% ( 1196) 00:14:52.564 3.413 - 3.429: 72.5257% ( 1170) 00:14:52.564 3.429 - 3.444: 77.1351% ( 761) 00:14:52.564 3.444 - 3.459: 81.8292% ( 775) 00:14:52.564 3.459 - 3.474: 84.8092% ( 492) 
00:14:52.564 3.474 - 3.490: 86.6505% ( 304) 00:14:52.564 3.490 - 3.505: 87.5893% ( 155) 00:14:52.564 3.505 - 3.520: 88.0194% ( 71) 00:14:52.564 3.520 - 3.535: 88.2556% ( 39) 00:14:52.564 3.535 - 3.550: 88.7280% ( 78) 00:14:52.564 3.550 - 3.566: 89.4670% ( 122) 00:14:52.564 3.566 - 3.581: 90.2302% ( 126) 00:14:52.564 3.581 - 3.596: 91.1811% ( 157) 00:14:52.564 3.596 - 3.611: 92.1502% ( 160) 00:14:52.564 3.611 - 3.627: 93.0709% ( 152) 00:14:52.564 3.627 - 3.642: 94.0218% ( 157) 00:14:52.564 3.642 - 3.657: 94.9546% ( 154) 00:14:52.564 3.657 - 3.672: 95.9116% ( 158) 00:14:52.564 3.672 - 3.688: 96.7656% ( 141) 00:14:52.564 3.688 - 3.703: 97.6015% ( 138) 00:14:52.564 3.703 - 3.718: 98.0860% ( 80) 00:14:52.564 3.718 - 3.733: 98.4797% ( 65) 00:14:52.564 3.733 - 3.749: 98.7523% ( 45) 00:14:52.564 3.749 - 3.764: 99.0248% ( 45) 00:14:52.564 3.764 - 3.779: 99.2429% ( 36) 00:14:52.564 3.779 - 3.794: 99.3882% ( 24) 00:14:52.564 3.794 - 3.810: 99.4609% ( 12) 00:14:52.564 3.810 - 3.825: 99.5397% ( 13) 00:14:52.564 3.825 - 3.840: 99.5760% ( 6) 00:14:52.564 3.840 - 3.855: 99.5821% ( 1) 00:14:52.564 3.855 - 3.870: 99.5942% ( 2) 00:14:52.564 3.870 - 3.886: 99.6063% ( 2) 00:14:52.564 5.211 - 5.242: 99.6124% ( 1) 00:14:52.564 5.242 - 5.272: 99.6184% ( 1) 00:14:52.564 5.303 - 5.333: 99.6305% ( 2) 00:14:52.565 5.333 - 5.364: 99.6366% ( 1) 00:14:52.565 5.394 - 5.425: 99.6426% ( 1) 00:14:52.565 5.455 - 5.486: 99.6487% ( 1) 00:14:52.565 5.486 - 5.516: 99.6548% ( 1) 00:14:52.565 5.547 - 5.577: 99.6790% ( 4) 00:14:52.565 5.608 - 5.638: 99.6850% ( 1) 00:14:52.565 5.669 - 5.699: 99.6972% ( 2) 00:14:52.565 5.699 - 5.730: 99.7032% ( 1) 00:14:52.565 5.821 - 5.851: 99.7093% ( 1) 00:14:52.565 5.851 - 5.882: 99.7274% ( 3) 00:14:52.565 5.912 - 5.943: 99.7335% ( 1) 00:14:52.565 5.943 - 5.973: 99.7456% ( 2) 00:14:52.565 6.004 - 6.034: 99.7577% ( 2) 00:14:52.565 6.065 - 6.095: 99.7638% ( 1) 00:14:52.565 6.095 - 6.126: 99.7698% ( 1) 00:14:52.565 6.187 - 6.217: 99.7759% ( 1) 00:14:52.565 6.339 - 6.370: 
99.7880% ( 2) 00:14:52.565 6.370 - 6.400: 99.7941% ( 1) 00:14:52.565 6.430 - 6.461: 99.8001% ( 1) 00:14:52.565 6.522 - 6.552: 99.8062% ( 1) 00:14:52.565 6.552 - 6.583: 99.8122% ( 1) 00:14:52.565 6.705 - 6.735: 99.8183% ( 1) 00:14:52.565 6.766 - 6.796: 99.8243% ( 1) 00:14:52.565 6.888 - 6.918: 99.8304% ( 1) 00:14:52.565 6.918 - 6.949: 99.8365% ( 1) 00:14:52.565 6.979 - 7.010: 99.8425% ( 1) 00:14:52.565 7.345 - 7.375: 99.8486% ( 1) 00:14:52.565 7.467 - 7.497: 99.8546% ( 1) 00:14:52.565 [2024-11-20 10:32:33.189160] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:52.565 7.741 - 7.771: 99.8607% ( 1) 00:14:52.565 8.229 - 8.290: 99.8728% ( 2) 00:14:52.565 8.290 - 8.350: 99.8789% ( 1) 00:14:52.565 9.265 - 9.326: 99.8849% ( 1) 00:14:52.565 9.570 - 9.630: 99.8910% ( 1) 00:14:52.565 18.895 - 19.017: 99.8970% ( 1) 00:14:52.565 3994.575 - 4025.783: 100.0000% ( 17) 00:14:52.565 00:14:52.565 Complete histogram 00:14:52.565 ================== 00:14:52.565 Range in us Cumulative Count 00:14:52.565 1.737 - 1.745: 0.0121% ( 2) 00:14:52.565 1.745 - 1.752: 0.0182% ( 1) 00:14:52.565 1.752 - 1.760: 0.0242% ( 1) 00:14:52.565 1.760 - 1.768: 0.0787% ( 9) 00:14:52.565 1.768 - 1.775: 0.2544% ( 29) 00:14:52.565 1.775 - 1.783: 0.5270% ( 45) 00:14:52.565 1.783 - 1.790: 1.0055% ( 79) 00:14:52.565 1.790 - 1.798: 1.8898% ( 146) 00:14:52.565 1.798 - 1.806: 2.7196% ( 137) 00:14:52.565 1.806 - 1.813: 3.8340% ( 184) 00:14:52.565 1.813 - 1.821: 12.1563% ( 1374) 00:14:52.565 1.821 - 1.829: 39.0551% ( 4441) 00:14:52.565 1.829 - 1.836: 67.7468% ( 4737) 00:14:52.565 1.836 - 1.844: 80.6118% ( 2124) 00:14:52.565 1.844 - 1.851: 84.9425% ( 715) 00:14:52.565 1.851 - 1.859: 87.5833% ( 436) 00:14:52.565 1.859 - 1.867: 89.4791% ( 313) 00:14:52.565 1.867 - 1.874: 91.3870% ( 315) 00:14:52.565 1.874 - 1.882: 93.6644% ( 376) 00:14:52.565 1.882 - 1.890: 95.6996% ( 336) 00:14:52.565 1.890 - 1.897: 97.0624% ( 225) 00:14:52.565 1.897 - 1.905: 97.7226% ( 109) 
00:14:52.565 1.905 - 1.912: 98.3525% ( 104) 00:14:52.565 1.912 - 1.920: 98.7038% ( 58) 00:14:52.565 1.920 - 1.928: 98.8916% ( 31) 00:14:52.565 1.928 - 1.935: 99.0067% ( 19) 00:14:52.565 1.935 - 1.943: 99.0551% ( 8) 00:14:52.565 1.943 - 1.950: 99.1157% ( 10) 00:14:52.565 1.950 - 1.966: 99.2247% ( 18) 00:14:52.565 1.966 - 1.981: 99.2671% ( 7) 00:14:52.565 1.981 - 1.996: 99.2732% ( 1) 00:14:52.565 1.996 - 2.011: 99.2853% ( 2) 00:14:52.565 2.011 - 2.027: 99.2913% ( 1) 00:14:52.565 2.133 - 2.149: 99.2974% ( 1) 00:14:52.565 2.179 - 2.194: 99.3035% ( 1) 00:14:52.565 3.657 - 3.672: 99.3095% ( 1) 00:14:52.565 3.764 - 3.779: 99.3216% ( 2) 00:14:52.565 3.794 - 3.810: 99.3277% ( 1) 00:14:52.565 3.810 - 3.825: 99.3337% ( 1) 00:14:52.565 3.886 - 3.901: 99.3459% ( 2) 00:14:52.565 3.931 - 3.962: 99.3519% ( 1) 00:14:52.565 3.962 - 3.992: 99.3580% ( 1) 00:14:52.565 3.992 - 4.023: 99.3640% ( 1) 00:14:52.565 4.053 - 4.084: 99.3701% ( 1) 00:14:52.565 4.114 - 4.145: 99.3761% ( 1) 00:14:52.565 4.206 - 4.236: 99.3822% ( 1) 00:14:52.565 4.358 - 4.389: 99.3882% ( 1) 00:14:52.565 4.419 - 4.450: 99.3943% ( 1) 00:14:52.565 4.480 - 4.510: 99.4004% ( 1) 00:14:52.565 4.724 - 4.754: 99.4064% ( 1) 00:14:52.565 4.876 - 4.907: 99.4125% ( 1) 00:14:52.565 4.968 - 4.998: 99.4185% ( 1) 00:14:52.565 4.998 - 5.029: 99.4246% ( 1) 00:14:52.565 5.090 - 5.120: 99.4306% ( 1) 00:14:52.565 5.181 - 5.211: 99.4428% ( 2) 00:14:52.565 5.303 - 5.333: 99.4609% ( 3) 00:14:52.565 5.638 - 5.669: 99.4670% ( 1) 00:14:52.565 5.821 - 5.851: 99.4730% ( 1) 00:14:52.565 6.065 - 6.095: 99.4791% ( 1) 00:14:52.565 6.217 - 6.248: 99.4852% ( 1) 00:14:52.565 6.370 - 6.400: 99.4912% ( 1) 00:14:52.565 7.192 - 7.223: 99.4973% ( 1) 00:14:52.565 7.223 - 7.253: 99.5033% ( 1) 00:14:52.565 11.520 - 11.581: 99.5094% ( 1) 00:14:52.565 13.166 - 13.227: 99.5154% ( 1) 00:14:52.565 3994.575 - 4025.783: 99.9939% ( 79) 00:14:52.565 4993.219 - 5024.427: 100.0000% ( 1) 00:14:52.565 00:14:52.565 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:14:52.565 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:14:52.565 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode2 00:14:52.565 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:14:52.565 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:52.850 [ 00:14:52.850 { 00:14:52.851 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:52.851 "subtype": "Discovery", 00:14:52.851 "listen_addresses": [], 00:14:52.851 "allow_any_host": true, 00:14:52.851 "hosts": [] 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:52.851 "subtype": "NVMe", 00:14:52.851 "listen_addresses": [ 00:14:52.851 { 00:14:52.851 "trtype": "VFIOUSER", 00:14:52.851 "adrfam": "IPv4", 00:14:52.851 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:52.851 "trsvcid": "0" 00:14:52.851 } 00:14:52.851 ], 00:14:52.851 "allow_any_host": true, 00:14:52.851 "hosts": [], 00:14:52.851 "serial_number": "SPDK1", 00:14:52.851 "model_number": "SPDK bdev Controller", 00:14:52.851 "max_namespaces": 32, 00:14:52.851 "min_cntlid": 1, 00:14:52.851 "max_cntlid": 65519, 00:14:52.851 "namespaces": [ 00:14:52.851 { 00:14:52.851 "nsid": 1, 00:14:52.851 "bdev_name": "Malloc1", 00:14:52.851 "name": "Malloc1", 00:14:52.851 "nguid": "432B9508B9A8491DA4CAC917F1F4A5E0", 00:14:52.851 "uuid": "432b9508-b9a8-491d-a4ca-c917f1f4a5e0" 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "nsid": 2, 00:14:52.851 "bdev_name": "Malloc3", 00:14:52.851 "name": "Malloc3", 00:14:52.851 "nguid": "19FCFD5B822740DFB88C39D71EF1A574", 00:14:52.851 
"uuid": "19fcfd5b-8227-40df-b88c-39d71ef1a574" 00:14:52.851 } 00:14:52.851 ] 00:14:52.851 }, 00:14:52.851 { 00:14:52.851 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:52.851 "subtype": "NVMe", 00:14:52.851 "listen_addresses": [ 00:14:52.851 { 00:14:52.851 "trtype": "VFIOUSER", 00:14:52.851 "adrfam": "IPv4", 00:14:52.851 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:52.851 "trsvcid": "0" 00:14:52.851 } 00:14:52.851 ], 00:14:52.851 "allow_any_host": true, 00:14:52.851 "hosts": [], 00:14:52.851 "serial_number": "SPDK2", 00:14:52.851 "model_number": "SPDK bdev Controller", 00:14:52.851 "max_namespaces": 32, 00:14:52.851 "min_cntlid": 1, 00:14:52.851 "max_cntlid": 65519, 00:14:52.851 "namespaces": [ 00:14:52.851 { 00:14:52.851 "nsid": 1, 00:14:52.851 "bdev_name": "Malloc2", 00:14:52.851 "name": "Malloc2", 00:14:52.851 "nguid": "B9E56A33AA254C99973399D9D4CAEC3A", 00:14:52.851 "uuid": "b9e56a33-aa25-4c99-9733-99d9d4caec3a" 00:14:52.851 } 00:14:52.851 ] 00:14:52.851 } 00:14:52.851 ] 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=3195268 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1269 -- # local i=0 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1280 -- # return 0 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:14:52.851 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:14:53.110 [2024-11-20 10:32:33.600609] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:14:53.110 Malloc4 00:14:53.110 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:14:53.110 [2024-11-20 10:32:33.834258] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:14:53.368 10:32:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:14:53.368 Asynchronous Event Request test 00:14:53.368 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:14:53.368 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:14:53.368 Registering asynchronous event callbacks... 00:14:53.368 Starting namespace attribute notice tests for all controllers... 00:14:53.368 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:14:53.368 aer_cb - Changed Namespace 00:14:53.368 Cleaning up... 
00:14:53.368 [ 00:14:53.368 { 00:14:53.368 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:14:53.368 "subtype": "Discovery", 00:14:53.368 "listen_addresses": [], 00:14:53.368 "allow_any_host": true, 00:14:53.368 "hosts": [] 00:14:53.368 }, 00:14:53.368 { 00:14:53.368 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:14:53.368 "subtype": "NVMe", 00:14:53.368 "listen_addresses": [ 00:14:53.368 { 00:14:53.368 "trtype": "VFIOUSER", 00:14:53.368 "adrfam": "IPv4", 00:14:53.368 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:14:53.368 "trsvcid": "0" 00:14:53.368 } 00:14:53.368 ], 00:14:53.368 "allow_any_host": true, 00:14:53.368 "hosts": [], 00:14:53.368 "serial_number": "SPDK1", 00:14:53.368 "model_number": "SPDK bdev Controller", 00:14:53.368 "max_namespaces": 32, 00:14:53.368 "min_cntlid": 1, 00:14:53.368 "max_cntlid": 65519, 00:14:53.368 "namespaces": [ 00:14:53.368 { 00:14:53.368 "nsid": 1, 00:14:53.368 "bdev_name": "Malloc1", 00:14:53.368 "name": "Malloc1", 00:14:53.368 "nguid": "432B9508B9A8491DA4CAC917F1F4A5E0", 00:14:53.368 "uuid": "432b9508-b9a8-491d-a4ca-c917f1f4a5e0" 00:14:53.368 }, 00:14:53.368 { 00:14:53.368 "nsid": 2, 00:14:53.368 "bdev_name": "Malloc3", 00:14:53.368 "name": "Malloc3", 00:14:53.368 "nguid": "19FCFD5B822740DFB88C39D71EF1A574", 00:14:53.368 "uuid": "19fcfd5b-8227-40df-b88c-39d71ef1a574" 00:14:53.368 } 00:14:53.368 ] 00:14:53.368 }, 00:14:53.368 { 00:14:53.368 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:14:53.368 "subtype": "NVMe", 00:14:53.368 "listen_addresses": [ 00:14:53.368 { 00:14:53.368 "trtype": "VFIOUSER", 00:14:53.368 "adrfam": "IPv4", 00:14:53.368 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:14:53.368 "trsvcid": "0" 00:14:53.368 } 00:14:53.368 ], 00:14:53.368 "allow_any_host": true, 00:14:53.368 "hosts": [], 00:14:53.368 "serial_number": "SPDK2", 00:14:53.368 "model_number": "SPDK bdev Controller", 00:14:53.368 "max_namespaces": 32, 00:14:53.369 "min_cntlid": 1, 00:14:53.369 "max_cntlid": 65519, 00:14:53.369 "namespaces": [ 
00:14:53.369 { 00:14:53.369 "nsid": 1, 00:14:53.369 "bdev_name": "Malloc2", 00:14:53.369 "name": "Malloc2", 00:14:53.369 "nguid": "B9E56A33AA254C99973399D9D4CAEC3A", 00:14:53.369 "uuid": "b9e56a33-aa25-4c99-9733-99d9d4caec3a" 00:14:53.369 }, 00:14:53.369 { 00:14:53.369 "nsid": 2, 00:14:53.369 "bdev_name": "Malloc4", 00:14:53.369 "name": "Malloc4", 00:14:53.369 "nguid": "20B91751F3F54CB3B3E8202DBD3CC9AD", 00:14:53.369 "uuid": "20b91751-f3f5-4cb3-b3e8-202dbd3cc9ad" 00:14:53.369 } 00:14:53.369 ] 00:14:53.369 } 00:14:53.369 ] 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 3195268 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3187611 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3187611 ']' 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3187611 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:53.369 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3187611 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3187611' 00:14:53.628 killing process with pid 3187611 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
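The `nvmf_get_subsystems` output above is plain JSON once the per-line timestamps are stripped. A sketch of walking that structure to list namespaces per subsystem; the fragment below is a hand-reduced copy of the shape shown in the log, not live RPC output:

```python
import json

# Hand-reduced copy of the nvmf_get_subsystems structure printed above.
subsystems_json = """
[
  {"nqn": "nqn.2014-08.org.nvmexpress.discovery", "subtype": "Discovery"},
  {"nqn": "nqn.2019-07.io.spdk:cnode2", "subtype": "NVMe",
   "namespaces": [
     {"nsid": 1, "name": "Malloc2", "uuid": "b9e56a33-aa25-4c99-9733-99d9d4caec3a"},
     {"nsid": 2, "name": "Malloc4", "uuid": "20b91751-f3f5-4cb3-b3e8-202dbd3cc9ad"}
   ]}
]
"""

for subsys in json.loads(subsystems_json):
    # Discovery subsystems carry no namespaces; .get() skips them cleanly.
    for ns in subsys.get("namespaces", []):
        print(subsys["nqn"], ns["nsid"], ns["name"])
```

The second namespace (`Malloc4`, nsid 2) is the one the AER test just hot-added, which is what triggered the "aer_cb - Changed Namespace" notice above.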
common/autotest_common.sh@973 -- # kill 3187611 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3187611 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=3195462 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 3195462' 00:14:53.628 Process pid: 3195462 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 3195462 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@835 -- # '[' -z 3195462 ']' 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:53.628 
10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:53.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:53.628 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:53.887 [2024-11-20 10:32:34.392272] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:14:53.887 [2024-11-20 10:32:34.393171] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:14:53.887 [2024-11-20 10:32:34.393220] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:53.887 [2024-11-20 10:32:34.467272] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:53.887 [2024-11-20 10:32:34.509066] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:53.887 [2024-11-20 10:32:34.509103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:53.887 [2024-11-20 10:32:34.509110] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:53.887 [2024-11-20 10:32:34.509116] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:53.887 [2024-11-20 10:32:34.509121] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:14:53.887 [2024-11-20 10:32:34.510581] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:53.887 [2024-11-20 10:32:34.510690] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:53.887 [2024-11-20 10:32:34.510795] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:53.887 [2024-11-20 10:32:34.510796] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:14:53.887 [2024-11-20 10:32:34.577499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:14:53.887 [2024-11-20 10:32:34.578312] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:14:53.887 [2024-11-20 10:32:34.578512] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:14:53.887 [2024-11-20 10:32:34.578985] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:14:53.887 [2024-11-20 10:32:34.579032] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:14:53.887 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:53.887 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@868 -- # return 0 00:14:53.887 10:32:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:55.285 10:32:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:55.543 Malloc1 00:14:55.543 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:55.543 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:55.802 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 
-s 0 00:14:56.060 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:56.060 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:56.060 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:56.318 Malloc2 00:14:56.318 10:32:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:14:56.318 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:56.577 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 3195462 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # '[' -z 3195462 ']' 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # kill -0 3195462 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # uname 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:56.836 10:32:37 
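Condensed from the `rpc.py` calls visible in the trace above, the per-device vfio-user setup the script performs is a short, fixed sequence. This is a sketch only: the `rpc.py` path and socket directory are placeholders for your own SPDK checkout, not the Jenkins paths in the log.

```shell
#!/usr/bin/env bash
# Sketch of the per-device vfio-user setup sequence seen in the log above.
# rpc and traddr are placeholders; adjust for your SPDK tree.
rpc=./scripts/rpc.py
traddr=/var/run/vfio-user/domain/vfio-user1/1

$rpc nvmf_create_transport -t VFIOUSER            # once per target
mkdir -p "$traddr"                                # socket directory for the device
$rpc bdev_malloc_create 64 512 -b Malloc1         # 64 MiB bdev, 512 B blocks
$rpc nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1
$rpc nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1
$rpc nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 \
    -t VFIOUSER -a "$traddr" -s 0
```

The interrupt-mode variant above differs only in the target flags (`--interrupt-mode` on `nvmf_tgt`, `-M -I` on `nvmf_create_transport`); the subsystem setup is identical.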
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3195462 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3195462' 00:14:56.836 killing process with pid 3195462 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@973 -- # kill 3195462 00:14:56.836 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@978 -- # wait 3195462 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:14:57.095 00:14:57.095 real 0m50.747s 00:14:57.095 user 3m16.222s 00:14:57.095 sys 0m3.284s 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:57.095 ************************************ 00:14:57.095 END TEST nvmf_vfio_user 00:14:57.095 ************************************ 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@10 -- # set +x 00:14:57.095 ************************************ 00:14:57.095 START TEST nvmf_vfio_user_nvme_compliance 00:14:57.095 ************************************ 00:14:57.095 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:14:57.355 * Looking for test storage... 00:14:57.355 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lcov --version 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@333 -- # local ver1 ver1_l 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@334 -- # local ver2 ver2_l 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # IFS=.-: 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@336 -- # read -ra ver1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # IFS=.-: 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@337 -- # read -ra ver2 00:14:57.355 10:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@338 -- # local 'op=<' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@340 -- # ver1_l=2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@341 -- # ver2_l=1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@344 -- # case "$op" in 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@345 -- # : 1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v = 0 )) 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # decimal 1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@365 -- # ver1[v]=1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # decimal 2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@353 -- # local d=2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:14:57.355 10:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@355 -- # echo 2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@366 -- # ver2[v]=2 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@368 -- # return 0 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:14:57.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.355 --rc genhtml_branch_coverage=1 00:14:57.355 --rc genhtml_function_coverage=1 00:14:57.355 --rc genhtml_legend=1 00:14:57.355 --rc geninfo_all_blocks=1 00:14:57.355 --rc geninfo_unexecuted_blocks=1 00:14:57.355 00:14:57.355 ' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:14:57.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.355 --rc genhtml_branch_coverage=1 00:14:57.355 --rc genhtml_function_coverage=1 00:14:57.355 --rc genhtml_legend=1 00:14:57.355 --rc geninfo_all_blocks=1 00:14:57.355 --rc geninfo_unexecuted_blocks=1 00:14:57.355 00:14:57.355 ' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:14:57.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.355 --rc genhtml_branch_coverage=1 00:14:57.355 --rc genhtml_function_coverage=1 00:14:57.355 --rc 
genhtml_legend=1 00:14:57.355 --rc geninfo_all_blocks=1 00:14:57.355 --rc geninfo_unexecuted_blocks=1 00:14:57.355 00:14:57.355 ' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:14:57.355 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:14:57.355 --rc genhtml_branch_coverage=1 00:14:57.355 --rc genhtml_function_coverage=1 00:14:57.355 --rc genhtml_legend=1 00:14:57.355 --rc geninfo_all_blocks=1 00:14:57.355 --rc geninfo_unexecuted_blocks=1 00:14:57.355 00:14:57.355 ' 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:14:57.355 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@15 -- # shopt -s extglob 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.356 10:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@50 -- # : 0 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:14:57.356 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@54 -- # have_pci_nics=0 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=3196229 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 3196229' 00:14:57.356 Process pid: 3196229 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:57.356 10:32:37 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 3196229 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@835 -- # '[' -z 3196229 ']' 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:57.356 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:57.356 10:32:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:57.356 [2024-11-20 10:32:38.016593] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:14:57.356 [2024-11-20 10:32:38.016640] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:57.615 [2024-11-20 10:32:38.090350] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:14:57.615 [2024-11-20 10:32:38.134317] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:14:57.615 [2024-11-20 10:32:38.134349] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:57.615 [2024-11-20 10:32:38.134356] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:57.615 [2024-11-20 10:32:38.134362] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:57.615 [2024-11-20 10:32:38.134367] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:57.615 [2024-11-20 10:32:38.135753] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:14:57.615 [2024-11-20 10:32:38.135858] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:14:57.615 [2024-11-20 10:32:38.135859] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:14:57.615 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:57.615 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@868 -- # return 0 00:14:57.615 10:32:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.602 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.602 malloc0 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 32 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.602 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.913 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.913 10:32:39 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:14:58.913 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:58.913 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:14:58.913 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:58.913 10:32:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:14:58.913 00:14:58.913 00:14:58.913 CUnit - A unit testing framework for C - Version 2.1-3 00:14:58.913 http://cunit.sourceforge.net/ 00:14:58.913 00:14:58.913 00:14:58.913 Suite: nvme_compliance 00:14:58.913 Test: admin_identify_ctrlr_verify_dptr ...[2024-11-20 10:32:39.479628] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.913 [2024-11-20 10:32:39.480979] vfio_user.c: 807:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:14:58.913 [2024-11-20 10:32:39.480994] vfio_user.c:5511:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:14:58.913 [2024-11-20 10:32:39.481000] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:14:58.913 [2024-11-20 10:32:39.482649] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.913 passed 00:14:58.913 Test: admin_identify_ctrlr_verify_fused ...[2024-11-20 10:32:39.563198] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:58.913 [2024-11-20 10:32:39.566226] 
vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:58.913 passed 00:14:59.206 Test: admin_identify_ns ...[2024-11-20 10:32:39.642469] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.206 [2024-11-20 10:32:39.709216] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:14:59.206 [2024-11-20 10:32:39.717211] ctrlr.c:2750:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:14:59.206 [2024-11-20 10:32:39.738311] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.206 passed 00:14:59.206 Test: admin_get_features_mandatory_features ...[2024-11-20 10:32:39.815957] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.206 [2024-11-20 10:32:39.818978] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.206 passed 00:14:59.206 Test: admin_get_features_optional_features ...[2024-11-20 10:32:39.898515] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.206 [2024-11-20 10:32:39.901535] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.206 passed 00:14:59.465 Test: admin_set_features_number_of_queues ...[2024-11-20 10:32:39.976238] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.465 [2024-11-20 10:32:40.083355] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.465 passed 00:14:59.465 Test: admin_get_log_page_mandatory_logs ...[2024-11-20 10:32:40.160281] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.465 [2024-11-20 10:32:40.163308] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.723 passed 00:14:59.723 Test: admin_get_log_page_with_lpo ...[2024-11-20 10:32:40.241123] 
vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.723 [2024-11-20 10:32:40.306215] ctrlr.c:2697:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:14:59.723 [2024-11-20 10:32:40.322281] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.723 passed 00:14:59.723 Test: fabric_property_get ...[2024-11-20 10:32:40.395129] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.723 [2024-11-20 10:32:40.396360] vfio_user.c:5604:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:14:59.723 [2024-11-20 10:32:40.398153] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.723 passed 00:14:59.982 Test: admin_delete_io_sq_use_admin_qid ...[2024-11-20 10:32:40.475673] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.982 [2024-11-20 10:32:40.476911] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:14:59.982 [2024-11-20 10:32:40.478687] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.982 passed 00:14:59.982 Test: admin_delete_io_sq_delete_sq_twice ...[2024-11-20 10:32:40.557430] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:14:59.982 [2024-11-20 10:32:40.642214] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.982 [2024-11-20 10:32:40.658216] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:14:59.982 [2024-11-20 10:32:40.663288] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:14:59.982 passed 00:15:00.240 Test: admin_delete_io_cq_use_admin_qid ...[2024-11-20 10:32:40.736281] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.240 [2024-11-20 
10:32:40.737519] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 00:15:00.240 [2024-11-20 10:32:40.739305] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.240 passed 00:15:00.240 Test: admin_delete_io_cq_delete_cq_first ...[2024-11-20 10:32:40.814968] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.240 [2024-11-20 10:32:40.890207] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:00.240 [2024-11-20 10:32:40.914208] vfio_user.c:2312:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:00.240 [2024-11-20 10:32:40.919288] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.240 passed 00:15:00.499 Test: admin_create_io_cq_verify_iv_pc ...[2024-11-20 10:32:40.994898] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.499 [2024-11-20 10:32:40.996131] vfio_user.c:2161:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:00.499 [2024-11-20 10:32:40.996156] vfio_user.c:2155:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:00.499 [2024-11-20 10:32:41.000931] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.499 passed 00:15:00.499 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-11-20 10:32:41.073709] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.499 [2024-11-20 10:32:41.168211] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:00.499 [2024-11-20 10:32:41.176213] vfio_user.c:2243:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:00.499 [2024-11-20 10:32:41.184206] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:00.499 [2024-11-20 
10:32:41.192212] vfio_user.c:2041:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:00.499 [2024-11-20 10:32:41.221314] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.758 passed 00:15:00.758 Test: admin_create_io_sq_verify_pc ...[2024-11-20 10:32:41.295160] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:00.758 [2024-11-20 10:32:41.310215] vfio_user.c:2054:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:00.758 [2024-11-20 10:32:41.330349] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:00.758 passed 00:15:00.758 Test: admin_create_io_qp_max_qps ...[2024-11-20 10:32:41.403864] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.134 [2024-11-20 10:32:42.513210] nvme_ctrlr.c:5523:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user, 0] No free I/O queue IDs 00:15:02.393 [2024-11-20 10:32:42.894756] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.393 passed 00:15:02.393 Test: admin_create_io_sq_shared_cq ...[2024-11-20 10:32:42.970595] vfio_user.c:2840:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:02.393 [2024-11-20 10:32:43.103207] vfio_user.c:2322:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:02.652 [2024-11-20 10:32:43.140257] vfio_user.c:2802:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:02.652 passed 00:15:02.652 00:15:02.652 Run Summary: Type Total Ran Passed Failed Inactive 00:15:02.652 suites 1 1 n/a 0 0 00:15:02.652 tests 18 18 18 0 0 00:15:02.652 asserts 360 360 360 0 n/a 00:15:02.652 00:15:02.652 Elapsed time = 1.505 seconds 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 3196229 00:15:02.652 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # '[' -z 3196229 ']' 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # kill -0 3196229 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # uname 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3196229 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3196229' 00:15:02.652 killing process with pid 3196229 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@973 -- # kill 3196229 00:15:02.652 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@978 -- # wait 3196229 00:15:02.910 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:02.910 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:02.910 00:15:02.910 real 0m5.658s 00:15:02.910 user 0m15.794s 00:15:02.910 sys 0m0.532s 00:15:02.910 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:02.910 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:02.910 ************************************ 00:15:02.911 END TEST nvmf_vfio_user_nvme_compliance 00:15:02.911 ************************************ 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:02.911 ************************************ 00:15:02.911 START TEST nvmf_vfio_user_fuzz 00:15:02.911 ************************************ 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:02.911 * Looking for test storage... 
00:15:02.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lcov --version 00:15:02.911 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # IFS=.-: 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@336 -- # read -ra ver1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # IFS=.-: 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@337 -- # read -ra ver2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@338 -- # local 'op=<' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@340 -- # ver1_l=2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@341 -- # ver2_l=1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@344 -- # case "$op" in 00:15:03.170 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@345 -- # : 1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # decimal 1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@365 -- # ver1[v]=1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # decimal 2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@353 -- # local d=2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@355 -- # echo 2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@366 -- # ver2[v]=2 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@368 -- # return 0 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:03.170 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:03.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.170 --rc genhtml_branch_coverage=1 00:15:03.170 --rc genhtml_function_coverage=1 00:15:03.170 --rc genhtml_legend=1 00:15:03.170 --rc geninfo_all_blocks=1 00:15:03.170 --rc geninfo_unexecuted_blocks=1 00:15:03.170 00:15:03.170 ' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:03.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.170 --rc genhtml_branch_coverage=1 00:15:03.170 --rc genhtml_function_coverage=1 00:15:03.170 --rc genhtml_legend=1 00:15:03.170 --rc geninfo_all_blocks=1 00:15:03.170 --rc geninfo_unexecuted_blocks=1 00:15:03.170 00:15:03.170 ' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:03.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.170 --rc genhtml_branch_coverage=1 00:15:03.170 --rc genhtml_function_coverage=1 00:15:03.170 --rc genhtml_legend=1 00:15:03.170 --rc geninfo_all_blocks=1 00:15:03.170 --rc geninfo_unexecuted_blocks=1 00:15:03.170 00:15:03.170 ' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:03.170 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:03.170 --rc genhtml_branch_coverage=1 00:15:03.170 --rc genhtml_function_coverage=1 00:15:03.170 --rc genhtml_legend=1 00:15:03.170 --rc geninfo_all_blocks=1 00:15:03.170 --rc geninfo_unexecuted_blocks=1 00:15:03.170 00:15:03.170 ' 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 
00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:03.170 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:03.171 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@15 -- # shopt -s extglob 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:03.171 10:32:43 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@50 -- # : 0 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:03.171 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=3197216 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 3197216' 00:15:03.171 Process pid: 3197216 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 3197216 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@835 -- # '[' -z 3197216 ']' 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:03.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:03.171 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:03.429 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:03.429 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@868 -- # return 0 00:15:03.429 10:32:43 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 malloc0 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.365 10:32:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 00:15:04.365 10:32:45 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:15:36.468 Fuzzing completed. 
Shutting down the fuzz application 00:15:36.468 00:15:36.468 Dumping successful admin opcodes: 00:15:36.468 8, 9, 10, 24, 00:15:36.468 Dumping successful io opcodes: 00:15:36.468 0, 00:15:36.468 NS: 0x20000081ef00 I/O qp, Total commands completed: 1013878, total successful commands: 3978, random_seed: 1126171584 00:15:36.468 NS: 0x20000081ef00 admin qp, Total commands completed: 249926, total successful commands: 2020, random_seed: 3401579776 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # '[' -z 3197216 ']' 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # kill -0 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # uname 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:36.468 
10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3197216' 00:15:36.468 killing process with pid 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@973 -- # kill 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@978 -- # wait 3197216 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:15:36.468 00:15:36.468 real 0m32.207s 00:15:36.468 user 0m29.823s 00:15:36.468 sys 0m31.540s 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:36.468 ************************************ 00:15:36.468 END TEST nvmf_vfio_user_fuzz 00:15:36.468 ************************************ 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:36.468 10:33:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:36.468 ************************************ 00:15:36.468 START TEST nvmf_auth_target 00:15:36.468 ************************************ 00:15:36.468 10:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:15:36.468 * Looking for test storage... 00:15:36.468 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lcov --version 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # IFS=.-: 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@336 -- # read -ra ver1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # IFS=.-: 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@337 -- # read -ra ver2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@338 -- # local 'op=<' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@340 -- # ver1_l=2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@341 -- # ver2_l=1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@343 -- # local lt=0 
gt=0 eq=0 v 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@344 -- # case "$op" in 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@345 -- # : 1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # decimal 1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@365 -- # ver1[v]=1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # decimal 2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@353 -- # local d=2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@355 -- # echo 2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@366 -- # ver2[v]=2 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@368 -- # return 0 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1694 -- # 
lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:15:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.469 --rc genhtml_branch_coverage=1 00:15:36.469 --rc genhtml_function_coverage=1 00:15:36.469 --rc genhtml_legend=1 00:15:36.469 --rc geninfo_all_blocks=1 00:15:36.469 --rc geninfo_unexecuted_blocks=1 00:15:36.469 00:15:36.469 ' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:15:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.469 --rc genhtml_branch_coverage=1 00:15:36.469 --rc genhtml_function_coverage=1 00:15:36.469 --rc genhtml_legend=1 00:15:36.469 --rc geninfo_all_blocks=1 00:15:36.469 --rc geninfo_unexecuted_blocks=1 00:15:36.469 00:15:36.469 ' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:15:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.469 --rc genhtml_branch_coverage=1 00:15:36.469 --rc genhtml_function_coverage=1 00:15:36.469 --rc genhtml_legend=1 00:15:36.469 --rc geninfo_all_blocks=1 00:15:36.469 --rc geninfo_unexecuted_blocks=1 00:15:36.469 00:15:36.469 ' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:15:36.469 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:36.469 --rc genhtml_branch_coverage=1 00:15:36.469 --rc genhtml_function_coverage=1 00:15:36.469 --rc genhtml_legend=1 00:15:36.469 --rc geninfo_all_blocks=1 00:15:36.469 --rc geninfo_unexecuted_blocks=1 00:15:36.469 00:15:36.469 ' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:36.469 10:33:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:36.469 
10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@15 -- # shopt -s extglob 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.469 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@50 -- # : 0 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:15:36.470 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 
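Editor's note on the error logged above (`common.sh: line 31: [: : integer expression expected`): it comes from running a numeric `-eq` test against an unset/empty variable. A minimal sketch of the failure and the usual guard, assuming a plain bash environment (variable name here is illustrative, not from common.sh):

```shell
#!/usr/bin/env bash
# Reproduce the logged failure: numeric test against an empty string.
flag=""
if [ "$flag" -eq 1 ] 2>/dev/null; then   # "[: : integer expression expected", exit 2
  echo "enabled"
fi

# Defaulting the expansion avoids the error entirely:
if [ "${flag:-0}" -eq 1 ]; then
  echo "enabled"
else
  echo "disabled"
fi
```

With `flag` empty the first test errors and is treated as false; the guarded form prints `disabled`.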
00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # nvmftestinit 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # remove_target_ns 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # xtrace_disable 00:15:36.470 10:33:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # pci_devs=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:15:41.745 
10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # net_devs=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # e810=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@136 -- # local -ga e810 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # x722=() 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@137 -- # local -ga x722 00:15:41.745 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # mlx=() 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@138 -- # local -ga mlx 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:15:41.746 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:15:41.746 Found 0000:86:00.0 (0x8086 - 0x159b) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:15:41.746 Found 0000:86:00.1 (0x8086 - 0x159b) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:15:41.746 Found net devices under 0000:86:00.0: cvl_0_0 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:15:41.746 Found net devices under 0000:86:00.1: cvl_0_1 00:15:41.746 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@262 -- # is_hw=yes 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@257 -- # create_target_ns 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 
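Editor's note: the trace above leans on bash namerefs (`local -n ns=NVMF_TARGET_NS_CMD`) so that helpers like `set_up` can receive the *name* of an array holding a command prefix (`ip netns exec nvmf_ns_spdk`) and run commands either in the namespace or on the host. A minimal sketch of that pattern, assuming bash ≥ 4.3; `run_in` and `NS_PREFIX` are illustrative names, and `env` stands in for the real `ip netns exec nvmf_ns_spdk` prefix (which needs root):

```shell
#!/usr/bin/env bash
# Receive the NAME of an array via nameref and use it as a command prefix,
# mirroring how setup.sh expands "${NVMF_TARGET_NS_CMD[@]}" before commands.
run_in() {
  local -n prefix=$1   # nameref: prefix aliases the caller's array
  shift
  "${prefix[@]}" "$@"  # empty array => command runs with no prefix
}

NS_PREFIX=(env)        # stand-in for: (ip netns exec nvmf_ns_spdk)
run_in NS_PREFIX echo hello
```

An empty prefix array makes the same helper run the command directly, which is exactly how the scripts reuse one function for both in-namespace and host execution.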
00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@28 -- # local -g _dev 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # ips=() 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # [[ tcp == 
tcp ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772161 00:15:41.746 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:15:41.747 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:15:41.747 10.0.0.1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@11 -- # local val=167772162 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@208 -- # ip netns exec 
nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:15:41.747 10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:15:41.747 10:33:21 
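Editor's note: the `val_to_ip` calls traced above turn the packed integers 167772161/167772162 into 10.0.0.1/10.0.0.2 via `printf '%u.%u.%u.%u\n'`. A sketch of that conversion; the bit-shift extraction of the four octets is an assumption about how setup.sh derives the printf arguments, though the name `val_to_ip` and the printf format are taken from the log:

```shell
#!/usr/bin/env bash
# Convert a 32-bit value (e.g. 0x0a000001 = 167772161) to dotted-quad form.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Keeping the pool as an integer lets the setup loop hand out consecutive addresses with plain arithmetic (`ip_pool += 2` per initiator/target pair, as seen in the trace).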
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:41.747 10:33:21 
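Editor's note on the `ipts` call traced above: the rule text is replayed into an iptables `-m comment` match tagged `SPDK_NVMF:`, so teardown can later find and delete exactly the rules the test inserted. A sketch of that tagging pattern; the real helper invokes `iptables` (root required), so this version only prints the command it would run:

```shell
#!/usr/bin/env bash
# Build an iptables command whose own arguments are echoed into a comment tag,
# matching the logged form: ... -m comment --comment 'SPDK_NVMF:<rule text>'.
# Printing instead of applying is this sketch's simplification.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Cleanup can then do the inverse: list rules, grep for the `SPDK_NVMF:` marker, and replay each match with `-D` to remove only test-owned entries.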
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:15:41.747 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:15:41.747 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:15:41.747 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.225 ms 00:15:41.747 00:15:41.747 --- 10.0.0.1 ping statistics --- 00:15:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.747 rtt min/avg/max/mdev = 0.225/0.225/0.225/0.000 ms 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:15:41.747 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:15:41.747 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:15:41.747 00:15:41.747 --- 10.0.0.2 ping statistics --- 00:15:41.747 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:15:41.747 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # return 0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:15:41.747 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator0 
00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:15:41.748 10:33:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.748 10:33:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@107 -- # local dev=target1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 
00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@109 -- # return 1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@168 -- # dev= 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@169 -- # return 0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@87 -- # nvmfappstart -L nvmf_auth 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=3206062 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # 
waitforlisten 3206062 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3206062 ']' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@89 -- # hostpid=3206189 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 
-r /var/tmp/host.sock -L nvme_auth 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key null 48 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=null 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=9aadb89685c587a99bdc20872831e8a2f1cc42c95bd2b157 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.554 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 9aadb89685c587a99bdc20872831e8a2f1cc42c95bd2b157 0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 9aadb89685c587a99bdc20872831e8a2f1cc42c95bd2b157 0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=9aadb89685c587a99bdc20872831e8a2f1cc42c95bd2b157 00:15:41.748 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=0 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.554 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.554 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # keys[0]=/tmp/spdk.key-null.554 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # gen_dhchap_key sha512 64 00:15:41.748 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=365997f6e6fe7db09dd943eadbd87c46643bcbbdaaf4d6f569e7e0357a69271f 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Qot 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 365997f6e6fe7db09dd943eadbd87c46643bcbbdaaf4d6f569e7e0357a69271f 3 00:15:41.749 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 365997f6e6fe7db09dd943eadbd87c46643bcbbdaaf4d6f569e7e0357a69271f 3 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=365997f6e6fe7db09dd943eadbd87c46643bcbbdaaf4d6f569e7e0357a69271f 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:41.749 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Qot 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Qot 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # ckeys[0]=/tmp/spdk.key-sha512.Qot 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha256 32 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.009 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=4e3820f2b39f9f05622007a98d1012af 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.blh 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 4e3820f2b39f9f05622007a98d1012af 1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 4e3820f2b39f9f05622007a98d1012af 1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=4e3820f2b39f9f05622007a98d1012af 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.blh 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.blh 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # keys[1]=/tmp/spdk.key-sha256.blh 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # gen_dhchap_key sha384 48 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.009 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=476a3e54bd3a8e40846718b6cc81b678874e70b3711a8bbe 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.axg 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 476a3e54bd3a8e40846718b6cc81b678874e70b3711a8bbe 2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 476a3e54bd3a8e40846718b6cc81b678874e70b3711a8bbe 2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=476a3e54bd3a8e40846718b6cc81b678874e70b3711a8bbe 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.axg 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.axg 00:15:42.009 10:33:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@95 -- # ckeys[1]=/tmp/spdk.key-sha384.axg 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha384 48 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha384 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=48 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=364c84fa94aa1328a33bdb4992addf0645f80c9abb7da576 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.CZb 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key 364c84fa94aa1328a33bdb4992addf0645f80c9abb7da576 2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 364c84fa94aa1328a33bdb4992addf0645f80c9abb7da576 2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # 
key=364c84fa94aa1328a33bdb4992addf0645f80c9abb7da576 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=2 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.CZb 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.CZb 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # keys[2]=/tmp/spdk.key-sha384.CZb 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # gen_dhchap_key sha256 32 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha256 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=32 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # key=a32b7f2ec95209c7baf2c77b8ff2947a 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.RKP 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key a32b7f2ec95209c7baf2c77b8ff2947a 1 
00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 a32b7f2ec95209c7baf2c77b8ff2947a 1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=a32b7f2ec95209c7baf2c77b8ff2947a 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=1 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:42.009 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.RKP 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.RKP 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # ckeys[2]=/tmp/spdk.key-sha256.RKP 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # gen_dhchap_key sha512 64 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@525 -- # local digest len file key 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@526 -- # local -A digests 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # digest=sha512 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@528 -- # len=64 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@529 -- # key=c274c9ccab6065c10955edfe8edc1f95dbc9684d1b8d466a8f0c9b922a06d991 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.gQ9 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@531 -- # format_dhchap_key c274c9ccab6065c10955edfe8edc1f95dbc9684d1b8d466a8f0c9b922a06d991 3 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@521 -- # format_key DHHC-1 c274c9ccab6065c10955edfe8edc1f95dbc9684d1b8d466a8f0c9b922a06d991 3 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@504 -- # local prefix key digest 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # key=c274c9ccab6065c10955edfe8edc1f95dbc9684d1b8d466a8f0c9b922a06d991 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@506 -- # digest=3 00:15:42.010 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@507 -- # python - 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.gQ9 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.gQ9 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # keys[3]=/tmp/spdk.key-sha512.gQ9 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@97 -- # ckeys[3]= 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@99 -- # waitforlisten 3206062 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3206062 
']' 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:42.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@100 -- # waitforlisten 3206189 /var/tmp/host.sock 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3206189 ']' 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/host.sock 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:15:42.269 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 
00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:42.269 10:33:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@101 -- # rpc_cmd 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.554 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.554 00:15:42.527 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.554 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n 
/tmp/spdk.key-sha512.Qot ]] 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qot 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qot 00:15:42.786 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qot 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.blh 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.blh 00:15:43.044 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.blh 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # 
[[ -n /tmp/spdk.key-sha384.axg ]] 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.axg 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.axg 00:15:43.303 10:33:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.axg 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CZb 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.CZb 00:15:43.303 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.CZb 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@111 -- # [[ -n /tmp/spdk.key-sha256.RKP ]] 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@112 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RKP 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@113 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RKP 00:15:43.580 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RKP 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@108 -- # for i in "${!keys[@]}" 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@109 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gQ9 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@110 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.gQ9 00:15:43.838 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.gQ9 00:15:44.096 10:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@111 -- # [[ -n '' ]] 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 0 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.096 10:33:24 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.096 10:33:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:44.354 00:15:44.354 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:44.354 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:44.354 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:44.613 { 00:15:44.613 "cntlid": 1, 00:15:44.613 "qid": 0, 00:15:44.613 "state": "enabled", 00:15:44.613 "thread": "nvmf_tgt_poll_group_000", 00:15:44.613 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:44.613 "listen_address": { 00:15:44.613 "trtype": "TCP", 00:15:44.613 "adrfam": "IPv4", 00:15:44.613 "traddr": "10.0.0.2", 00:15:44.613 "trsvcid": "4420" 00:15:44.613 }, 00:15:44.613 "peer_address": { 00:15:44.613 "trtype": "TCP", 00:15:44.613 "adrfam": "IPv4", 00:15:44.613 "traddr": "10.0.0.1", 00:15:44.613 "trsvcid": "60808" 00:15:44.613 }, 00:15:44.613 "auth": { 00:15:44.613 "state": "completed", 00:15:44.613 "digest": "sha256", 00:15:44.613 "dhgroup": "null" 00:15:44.613 } 00:15:44.613 } 00:15:44.613 ]' 00:15:44.613 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:44.871 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:45.129 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:45.129 10:33:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:45.696 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha256 --dhchap-dhgroups null 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 1 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.696 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:45.955 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:46.214 { 00:15:46.214 "cntlid": 3, 00:15:46.214 "qid": 0, 00:15:46.214 "state": "enabled", 00:15:46.214 "thread": "nvmf_tgt_poll_group_000", 00:15:46.214 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:46.214 "listen_address": { 00:15:46.214 "trtype": "TCP", 00:15:46.214 "adrfam": "IPv4", 00:15:46.214 
"traddr": "10.0.0.2", 00:15:46.214 "trsvcid": "4420" 00:15:46.214 }, 00:15:46.214 "peer_address": { 00:15:46.214 "trtype": "TCP", 00:15:46.214 "adrfam": "IPv4", 00:15:46.214 "traddr": "10.0.0.1", 00:15:46.214 "trsvcid": "60836" 00:15:46.214 }, 00:15:46.214 "auth": { 00:15:46.214 "state": "completed", 00:15:46.214 "digest": "sha256", 00:15:46.214 "dhgroup": "null" 00:15:46.214 } 00:15:46.214 } 00:15:46.214 ]' 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:46.214 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:46.473 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:46.473 10:33:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:46.473 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:46.473 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:46.473 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:46.731 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:46.731 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:47.297 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 2 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 
-- # dhgroup=null 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.297 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.298 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.298 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.298 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.298 10:33:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:47.556 00:15:47.556 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:47.556 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:47.556 
10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:47.814 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:47.814 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:47.814 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:47.815 { 00:15:47.815 "cntlid": 5, 00:15:47.815 "qid": 0, 00:15:47.815 "state": "enabled", 00:15:47.815 "thread": "nvmf_tgt_poll_group_000", 00:15:47.815 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:47.815 "listen_address": { 00:15:47.815 "trtype": "TCP", 00:15:47.815 "adrfam": "IPv4", 00:15:47.815 "traddr": "10.0.0.2", 00:15:47.815 "trsvcid": "4420" 00:15:47.815 }, 00:15:47.815 "peer_address": { 00:15:47.815 "trtype": "TCP", 00:15:47.815 "adrfam": "IPv4", 00:15:47.815 "traddr": "10.0.0.1", 00:15:47.815 "trsvcid": "60864" 00:15:47.815 }, 00:15:47.815 "auth": { 00:15:47.815 "state": "completed", 00:15:47.815 "digest": "sha256", 00:15:47.815 "dhgroup": "null" 00:15:47.815 } 00:15:47.815 } 00:15:47.815 ]' 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 
-- # jq -r '.[0].auth.dhgroup' 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:47.815 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:48.073 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:48.074 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:48.074 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:48.074 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:15:48.074 10:33:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:48.642 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:48.642 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 null 3 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
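By this point the log has repeated the same `connect_authenticate` cycle for keys 0 through 2 and is starting key 3: configure the host's DH-HMAC-CHAP digest and DH group, register the host NQN with its key pair on the target subsystem, attach a controller over TCP, verify the qpair's `auth` state is `completed`, then detach and remove the host. A hedged sketch of one iteration, using the rpc.py path, NQNs, and address from the log; it only prints the commands instead of executing them, since running them requires a live SPDK target and host daemon:

```shell
# One connect_authenticate iteration as seen in the log (echo-only sketch).
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
subnqn=nqn.2024-03.io.spdk:cnode0
hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # Restrict the host to one digest/DH group combination.
    echo "$rpc" -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    # Register the host on the target with its key and controller key.
    echo "$rpc" nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
    # Attach a controller; authentication happens during connect.
    echo "$rpc" -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" \
        -b nvme0 --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
}

connect_authenticate sha256 null 0
```

Note that key 3 has no companion `ckey3` in this run (the `[[ -n '' ]]` check earlier in the log), so its `nvmf_subsystem_add_host` and attach are issued without a `--dhchap-ctrlr-key`, matching the bracket expansion `${ckeys[$3]:+...}` in `target/auth.sh`.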
00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:48.902 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:49.161 00:15:49.161 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:49.161 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:49.161 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:49.419 
10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:49.419 { 00:15:49.419 "cntlid": 7, 00:15:49.419 "qid": 0, 00:15:49.419 "state": "enabled", 00:15:49.419 "thread": "nvmf_tgt_poll_group_000", 00:15:49.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:49.419 "listen_address": { 00:15:49.419 "trtype": "TCP", 00:15:49.419 "adrfam": "IPv4", 00:15:49.419 "traddr": "10.0.0.2", 00:15:49.419 "trsvcid": "4420" 00:15:49.419 }, 00:15:49.419 "peer_address": { 00:15:49.419 "trtype": "TCP", 00:15:49.419 "adrfam": "IPv4", 00:15:49.419 "traddr": "10.0.0.1", 00:15:49.419 "trsvcid": "40872" 00:15:49.419 }, 00:15:49.419 "auth": { 00:15:49.419 "state": "completed", 00:15:49.419 "digest": "sha256", 00:15:49.419 "dhgroup": "null" 00:15:49.419 } 00:15:49.419 } 00:15:49.419 ]' 00:15:49.419 10:33:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:49.419 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:49.683 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:15:49.683 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:50.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:50.248 10:33:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 
--dhchap-dhgroups ffdhe2048 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 0 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.507 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:50.765 00:15:50.766 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:50.766 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:50.766 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:51.024 { 00:15:51.024 "cntlid": 9, 00:15:51.024 "qid": 0, 00:15:51.024 "state": "enabled", 00:15:51.024 "thread": "nvmf_tgt_poll_group_000", 00:15:51.024 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:51.024 "listen_address": { 00:15:51.024 "trtype": "TCP", 00:15:51.024 "adrfam": "IPv4", 00:15:51.024 "traddr": "10.0.0.2", 00:15:51.024 "trsvcid": "4420" 00:15:51.024 }, 00:15:51.024 "peer_address": { 00:15:51.024 "trtype": "TCP", 00:15:51.024 "adrfam": "IPv4", 00:15:51.024 "traddr": "10.0.0.1", 00:15:51.024 "trsvcid": "40900" 00:15:51.024 
}, 00:15:51.024 "auth": { 00:15:51.024 "state": "completed", 00:15:51.024 "digest": "sha256", 00:15:51.024 "dhgroup": "ffdhe2048" 00:15:51.024 } 00:15:51.024 } 00:15:51.024 ]' 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:51.024 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:51.283 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:51.283 10:33:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret 
DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:51.850 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:51.850 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 1 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.108 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:52.366 00:15:52.366 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:52.366 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:52.366 10:33:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:52.624 { 00:15:52.624 "cntlid": 11, 00:15:52.624 "qid": 0, 00:15:52.624 "state": "enabled", 00:15:52.624 "thread": "nvmf_tgt_poll_group_000", 00:15:52.624 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:52.624 "listen_address": { 00:15:52.624 "trtype": "TCP", 00:15:52.624 "adrfam": "IPv4", 00:15:52.624 "traddr": "10.0.0.2", 00:15:52.624 "trsvcid": "4420" 00:15:52.624 }, 00:15:52.624 "peer_address": { 00:15:52.624 "trtype": "TCP", 00:15:52.624 "adrfam": "IPv4", 00:15:52.624 "traddr": "10.0.0.1", 00:15:52.624 "trsvcid": "40924" 00:15:52.624 }, 00:15:52.624 "auth": { 00:15:52.624 "state": "completed", 00:15:52.624 "digest": "sha256", 00:15:52.624 "dhgroup": "ffdhe2048" 00:15:52.624 } 00:15:52.624 } 00:15:52.624 ]' 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:52.624 10:33:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:52.624 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:52.882 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:52.882 10:33:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:53.449 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.449 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 2 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:53.708 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.709 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:53.967 00:15:53.967 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:53.967 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:53.967 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:54.225 10:33:34 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:54.225 { 00:15:54.225 "cntlid": 13, 00:15:54.225 "qid": 0, 00:15:54.225 "state": "enabled", 00:15:54.225 "thread": "nvmf_tgt_poll_group_000", 00:15:54.225 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:54.225 "listen_address": { 00:15:54.225 "trtype": "TCP", 00:15:54.225 "adrfam": "IPv4", 00:15:54.225 "traddr": "10.0.0.2", 00:15:54.225 "trsvcid": "4420" 00:15:54.225 }, 00:15:54.225 "peer_address": { 00:15:54.225 "trtype": "TCP", 00:15:54.225 "adrfam": "IPv4", 00:15:54.225 "traddr": "10.0.0.1", 00:15:54.225 "trsvcid": "40938" 00:15:54.225 }, 00:15:54.225 "auth": { 00:15:54.225 "state": "completed", 00:15:54.225 "digest": "sha256", 00:15:54.225 "dhgroup": "ffdhe2048" 00:15:54.225 } 00:15:54.225 } 00:15:54.225 ]' 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:54.225 10:33:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:54.484 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:15:54.484 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:55.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.051 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe2048 3 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.309 10:33:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:15:55.567 00:15:55.567 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:55.567 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:55.567 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:55.825 { 00:15:55.825 "cntlid": 15, 00:15:55.825 "qid": 0, 00:15:55.825 "state": "enabled", 00:15:55.825 "thread": "nvmf_tgt_poll_group_000", 00:15:55.825 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:55.825 "listen_address": { 00:15:55.825 "trtype": "TCP", 00:15:55.825 "adrfam": "IPv4", 00:15:55.825 "traddr": "10.0.0.2", 00:15:55.825 "trsvcid": "4420" 00:15:55.825 }, 00:15:55.825 "peer_address": { 00:15:55.825 "trtype": "TCP", 00:15:55.825 "adrfam": "IPv4", 00:15:55.825 "traddr": "10.0.0.1", 
00:15:55.825 "trsvcid": "40968" 00:15:55.825 }, 00:15:55.825 "auth": { 00:15:55.825 "state": "completed", 00:15:55.825 "digest": "sha256", 00:15:55.825 "dhgroup": "ffdhe2048" 00:15:55.825 } 00:15:55.825 } 00:15:55.825 ]' 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:55.825 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:55.826 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:15:55.826 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:55.826 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:55.826 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:55.826 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:56.084 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:15:56.084 10:33:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:56.651 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.651 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 0 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:15:56.909 10:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:56.909 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.910 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:56.910 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:15:57.168 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:57.168 { 00:15:57.168 "cntlid": 17, 00:15:57.168 "qid": 0, 00:15:57.168 "state": "enabled", 00:15:57.168 "thread": "nvmf_tgt_poll_group_000", 00:15:57.168 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:57.168 "listen_address": { 00:15:57.168 "trtype": "TCP", 00:15:57.168 "adrfam": "IPv4", 00:15:57.168 "traddr": "10.0.0.2", 00:15:57.168 "trsvcid": "4420" 00:15:57.168 }, 00:15:57.168 "peer_address": { 00:15:57.168 "trtype": "TCP", 00:15:57.168 "adrfam": "IPv4", 00:15:57.168 "traddr": "10.0.0.1", 00:15:57.168 "trsvcid": "41000" 00:15:57.168 }, 00:15:57.168 "auth": { 00:15:57.168 "state": "completed", 00:15:57.168 "digest": "sha256", 00:15:57.168 "dhgroup": "ffdhe3072" 00:15:57.168 } 00:15:57.168 } 00:15:57.168 ]' 00:15:57.168 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:57.426 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:57.426 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:57.426 10:33:37 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:57.426 10:33:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:57.426 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:57.426 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:57.426 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:57.684 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:57.685 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:58.253 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:58.253 10:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 1 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.253 10:33:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.253 10:33:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:15:58.512 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:15:58.772 { 00:15:58.772 "cntlid": 19, 00:15:58.772 "qid": 0, 00:15:58.772 "state": "enabled", 00:15:58.772 "thread": "nvmf_tgt_poll_group_000", 00:15:58.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:15:58.772 "listen_address": { 00:15:58.772 "trtype": "TCP", 00:15:58.772 "adrfam": "IPv4", 00:15:58.772 "traddr": "10.0.0.2", 00:15:58.772 "trsvcid": "4420" 00:15:58.772 }, 00:15:58.772 "peer_address": { 00:15:58.772 "trtype": "TCP", 00:15:58.772 "adrfam": "IPv4", 00:15:58.772 "traddr": "10.0.0.1", 00:15:58.772 "trsvcid": "41044" 00:15:58.772 }, 00:15:58.772 "auth": { 00:15:58.772 "state": "completed", 00:15:58.772 "digest": "sha256", 00:15:58.772 "dhgroup": "ffdhe3072" 00:15:58.772 } 00:15:58.772 } 00:15:58.772 ]' 00:15:58.772 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:15:59.030 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:15:59.030 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:15:59.030 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:15:59.030 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:15:59.030 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:15:59.031 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:15:59.031 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:15:59.289 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:59.289 10:33:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:15:59.855 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.855 10:33:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 2 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:59.855 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.113 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.113 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.113 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.113 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:00.113 00:16:00.371 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:00.371 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:00.371 10:33:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:00.371 { 00:16:00.371 "cntlid": 21, 00:16:00.371 "qid": 0, 00:16:00.371 "state": "enabled", 00:16:00.371 "thread": "nvmf_tgt_poll_group_000", 00:16:00.371 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:00.371 "listen_address": { 00:16:00.371 "trtype": "TCP", 00:16:00.371 "adrfam": "IPv4", 00:16:00.371 "traddr": "10.0.0.2", 00:16:00.371 
"trsvcid": "4420" 00:16:00.371 }, 00:16:00.371 "peer_address": { 00:16:00.371 "trtype": "TCP", 00:16:00.371 "adrfam": "IPv4", 00:16:00.371 "traddr": "10.0.0.1", 00:16:00.371 "trsvcid": "45898" 00:16:00.371 }, 00:16:00.371 "auth": { 00:16:00.371 "state": "completed", 00:16:00.371 "digest": "sha256", 00:16:00.371 "dhgroup": "ffdhe3072" 00:16:00.371 } 00:16:00.371 } 00:16:00.371 ]' 00:16:00.371 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:00.629 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:00.887 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:00.887 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:01.453 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.453 10:33:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe3072 3 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.453 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.711 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:01.711 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.711 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:01.711 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:01.969 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:01.970 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:01.970 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:01.970 { 00:16:01.970 "cntlid": 23, 00:16:01.970 "qid": 0, 00:16:01.970 "state": "enabled", 00:16:01.970 "thread": "nvmf_tgt_poll_group_000", 00:16:01.970 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:01.970 "listen_address": { 00:16:01.970 "trtype": "TCP", 00:16:01.970 "adrfam": "IPv4", 00:16:01.970 "traddr": "10.0.0.2", 00:16:01.970 "trsvcid": "4420" 00:16:01.970 }, 00:16:01.970 "peer_address": { 00:16:01.970 "trtype": "TCP", 00:16:01.970 "adrfam": "IPv4", 00:16:01.970 "traddr": "10.0.0.1", 00:16:01.970 "trsvcid": "45926" 00:16:01.970 }, 00:16:01.970 "auth": { 00:16:01.970 "state": "completed", 00:16:01.970 "digest": "sha256", 00:16:01.970 "dhgroup": "ffdhe3072" 00:16:01.970 } 00:16:01.970 } 00:16:01.970 ]' 00:16:01.970 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:02.228 10:33:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:02.228 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:02.485 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:02.485 10:33:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:03.051 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 0 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.051 10:33:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:03.309 00:16:03.309 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:03.309 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:03.309 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:03.568 10:33:44 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:03.568 { 00:16:03.568 "cntlid": 25, 00:16:03.568 "qid": 0, 00:16:03.568 "state": "enabled", 00:16:03.568 "thread": "nvmf_tgt_poll_group_000", 00:16:03.568 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:03.568 "listen_address": { 00:16:03.568 "trtype": "TCP", 00:16:03.568 "adrfam": "IPv4", 00:16:03.568 "traddr": "10.0.0.2", 00:16:03.568 "trsvcid": "4420" 00:16:03.568 }, 00:16:03.568 "peer_address": { 00:16:03.568 "trtype": "TCP", 00:16:03.568 "adrfam": "IPv4", 00:16:03.568 "traddr": "10.0.0.1", 00:16:03.568 "trsvcid": "45954" 00:16:03.568 }, 00:16:03.568 "auth": { 00:16:03.568 "state": "completed", 00:16:03.568 "digest": "sha256", 00:16:03.568 "dhgroup": "ffdhe4096" 00:16:03.568 } 00:16:03.568 } 00:16:03.568 ]' 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:03.568 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:03.827 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:03.827 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:03.827 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:03.827 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:03.827 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:04.084 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:04.084 10:33:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:04.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:04.650 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:04.650 10:33:45 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 1 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:04.909 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:05.167 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:05.167 { 00:16:05.167 "cntlid": 27, 00:16:05.167 "qid": 0, 00:16:05.167 "state": "enabled", 00:16:05.167 "thread": "nvmf_tgt_poll_group_000", 00:16:05.167 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:05.167 "listen_address": { 00:16:05.167 "trtype": "TCP", 00:16:05.167 "adrfam": "IPv4", 00:16:05.167 "traddr": "10.0.0.2", 00:16:05.167 
"trsvcid": "4420" 00:16:05.167 }, 00:16:05.167 "peer_address": { 00:16:05.167 "trtype": "TCP", 00:16:05.167 "adrfam": "IPv4", 00:16:05.167 "traddr": "10.0.0.1", 00:16:05.167 "trsvcid": "45972" 00:16:05.167 }, 00:16:05.167 "auth": { 00:16:05.167 "state": "completed", 00:16:05.167 "digest": "sha256", 00:16:05.167 "dhgroup": "ffdhe4096" 00:16:05.167 } 00:16:05.167 } 00:16:05.167 ]' 00:16:05.167 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:05.426 10:33:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:05.684 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:05.684 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:06.250 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 2 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.250 10:33:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:06.516 00:16:06.516 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:06.516 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:16:06.516 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:06.809 { 00:16:06.809 "cntlid": 29, 00:16:06.809 "qid": 0, 00:16:06.809 "state": "enabled", 00:16:06.809 "thread": "nvmf_tgt_poll_group_000", 00:16:06.809 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:06.809 "listen_address": { 00:16:06.809 "trtype": "TCP", 00:16:06.809 "adrfam": "IPv4", 00:16:06.809 "traddr": "10.0.0.2", 00:16:06.809 "trsvcid": "4420" 00:16:06.809 }, 00:16:06.809 "peer_address": { 00:16:06.809 "trtype": "TCP", 00:16:06.809 "adrfam": "IPv4", 00:16:06.809 "traddr": "10.0.0.1", 00:16:06.809 "trsvcid": "45994" 00:16:06.809 }, 00:16:06.809 "auth": { 00:16:06.809 "state": "completed", 00:16:06.809 "digest": "sha256", 00:16:06.809 "dhgroup": "ffdhe4096" 00:16:06.809 } 00:16:06.809 } 00:16:06.809 ]' 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:06.809 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:07.108 10:33:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:07.109 10:33:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:07.676 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.676 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe4096 3 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:07.935 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:08.193 00:16:08.193 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:08.193 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:08.193 10:33:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:08.451 { 00:16:08.451 "cntlid": 31, 00:16:08.451 "qid": 0, 00:16:08.451 "state": "enabled", 00:16:08.451 "thread": "nvmf_tgt_poll_group_000", 00:16:08.451 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:08.451 "listen_address": { 00:16:08.451 "trtype": "TCP", 00:16:08.451 "adrfam": "IPv4", 00:16:08.451 "traddr": "10.0.0.2", 00:16:08.451 "trsvcid": "4420" 00:16:08.451 }, 00:16:08.451 "peer_address": { 00:16:08.451 "trtype": "TCP", 00:16:08.451 "adrfam": "IPv4", 00:16:08.451 "traddr": "10.0.0.1", 00:16:08.451 "trsvcid": "46018" 00:16:08.451 }, 00:16:08.451 "auth": { 00:16:08.451 "state": "completed", 00:16:08.451 "digest": "sha256", 00:16:08.451 "dhgroup": "ffdhe4096" 00:16:08.451 } 00:16:08.451 } 00:16:08.451 ]' 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:08.451 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:08.709 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:08.709 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:09.275 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:09.275 10:33:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.275 10:33:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 0 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.534 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:09.792 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:10.050 { 00:16:10.050 "cntlid": 33, 00:16:10.050 "qid": 0, 00:16:10.050 "state": "enabled", 00:16:10.050 "thread": "nvmf_tgt_poll_group_000", 00:16:10.050 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:10.050 "listen_address": { 00:16:10.050 "trtype": "TCP", 00:16:10.050 "adrfam": "IPv4", 00:16:10.050 "traddr": "10.0.0.2", 00:16:10.050 
"trsvcid": "4420" 00:16:10.050 }, 00:16:10.050 "peer_address": { 00:16:10.050 "trtype": "TCP", 00:16:10.050 "adrfam": "IPv4", 00:16:10.050 "traddr": "10.0.0.1", 00:16:10.050 "trsvcid": "37402" 00:16:10.050 }, 00:16:10.050 "auth": { 00:16:10.050 "state": "completed", 00:16:10.050 "digest": "sha256", 00:16:10.050 "dhgroup": "ffdhe6144" 00:16:10.050 } 00:16:10.050 } 00:16:10.050 ]' 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:10.050 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:10.308 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:10.308 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:10.308 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:10.308 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:10.308 10:33:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:10.566 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:10.566 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:11.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 1 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:11.143 10:33:51 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.143 10:33:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:11.710 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:11.710 { 00:16:11.710 "cntlid": 35, 00:16:11.710 "qid": 0, 00:16:11.710 "state": "enabled", 00:16:11.710 "thread": "nvmf_tgt_poll_group_000", 00:16:11.710 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:11.710 "listen_address": { 00:16:11.710 "trtype": "TCP", 00:16:11.710 "adrfam": "IPv4", 00:16:11.710 "traddr": "10.0.0.2", 00:16:11.710 "trsvcid": "4420" 00:16:11.710 }, 00:16:11.710 "peer_address": { 00:16:11.710 "trtype": "TCP", 00:16:11.710 "adrfam": "IPv4", 00:16:11.710 "traddr": "10.0.0.1", 00:16:11.710 "trsvcid": "37420" 00:16:11.710 }, 00:16:11.710 "auth": { 00:16:11.710 "state": "completed", 00:16:11.710 "digest": "sha256", 00:16:11.710 "dhgroup": "ffdhe6144" 00:16:11.710 } 00:16:11.710 } 00:16:11.710 ]' 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:11.710 10:33:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:11.710 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:11.968 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:11.969 10:33:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:12.534 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:12.793 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 2 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:12.793 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:13.358 00:16:13.358 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:13.358 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:13.358 10:33:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:13.358 10:33:54 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:13.358 { 00:16:13.358 "cntlid": 37, 00:16:13.358 "qid": 0, 00:16:13.358 "state": "enabled", 00:16:13.358 "thread": "nvmf_tgt_poll_group_000", 00:16:13.358 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:13.358 "listen_address": { 00:16:13.358 "trtype": "TCP", 00:16:13.358 "adrfam": "IPv4", 00:16:13.358 "traddr": "10.0.0.2", 00:16:13.358 "trsvcid": "4420" 00:16:13.358 }, 00:16:13.358 "peer_address": { 00:16:13.358 "trtype": "TCP", 00:16:13.358 "adrfam": "IPv4", 00:16:13.358 "traddr": "10.0.0.1", 00:16:13.358 "trsvcid": "37442" 00:16:13.358 }, 00:16:13.358 "auth": { 00:16:13.358 "state": "completed", 00:16:13.358 "digest": "sha256", 00:16:13.358 "dhgroup": "ffdhe6144" 00:16:13.358 } 00:16:13.358 } 00:16:13.358 ]' 00:16:13.358 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:13.615 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:13.615 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:13.615 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:13.615 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:13.615 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:13.616 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:13.616 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:13.873 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:13.873 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:14.439 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.439 10:33:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe6144 3 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.697 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:14.955 00:16:14.955 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:14.955 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:14.955 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:15.213 { 00:16:15.213 "cntlid": 39, 00:16:15.213 "qid": 0, 00:16:15.213 "state": "enabled", 00:16:15.213 "thread": "nvmf_tgt_poll_group_000", 00:16:15.213 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:15.213 "listen_address": { 00:16:15.213 "trtype": "TCP", 00:16:15.213 "adrfam": 
"IPv4", 00:16:15.213 "traddr": "10.0.0.2", 00:16:15.213 "trsvcid": "4420" 00:16:15.213 }, 00:16:15.213 "peer_address": { 00:16:15.213 "trtype": "TCP", 00:16:15.213 "adrfam": "IPv4", 00:16:15.213 "traddr": "10.0.0.1", 00:16:15.213 "trsvcid": "37472" 00:16:15.213 }, 00:16:15.213 "auth": { 00:16:15.213 "state": "completed", 00:16:15.213 "digest": "sha256", 00:16:15.213 "dhgroup": "ffdhe6144" 00:16:15.213 } 00:16:15.213 } 00:16:15.213 ]' 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:15.213 10:33:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:15.472 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:15.472 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:16.039 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.039 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:16.297 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 0 00:16:16.297 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:16.297 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:16.297 
10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:16.297 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:16.297 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.298 10:33:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:16.865 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:16.865 10:33:57 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:16.865 { 00:16:16.865 "cntlid": 41, 00:16:16.865 "qid": 0, 00:16:16.865 "state": "enabled", 00:16:16.865 "thread": "nvmf_tgt_poll_group_000", 00:16:16.865 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:16.865 "listen_address": { 00:16:16.865 "trtype": "TCP", 00:16:16.865 "adrfam": "IPv4", 00:16:16.865 "traddr": "10.0.0.2", 00:16:16.865 "trsvcid": "4420" 00:16:16.865 }, 00:16:16.865 "peer_address": { 00:16:16.865 "trtype": "TCP", 00:16:16.865 "adrfam": "IPv4", 00:16:16.865 "traddr": "10.0.0.1", 00:16:16.865 "trsvcid": "37506" 00:16:16.865 }, 00:16:16.865 "auth": { 00:16:16.865 "state": "completed", 00:16:16.865 "digest": "sha256", 00:16:16.865 "dhgroup": "ffdhe8192" 00:16:16.865 } 00:16:16.865 } 00:16:16.865 ]' 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 
== \s\h\a\2\5\6 ]] 00:16:16.865 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:17.123 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:17.123 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:17.123 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:17.123 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:17.123 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:17.381 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:17.381 10:33:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:17.947 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 1 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:17.947 10:33:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:18.513 00:16:18.513 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:18.513 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:18.513 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:18.772 10:33:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:18.772 { 00:16:18.772 "cntlid": 43, 00:16:18.772 "qid": 0, 00:16:18.772 "state": "enabled", 00:16:18.772 "thread": "nvmf_tgt_poll_group_000", 00:16:18.772 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:18.772 "listen_address": { 00:16:18.772 "trtype": "TCP", 00:16:18.772 "adrfam": "IPv4", 00:16:18.772 "traddr": "10.0.0.2", 00:16:18.772 "trsvcid": "4420" 00:16:18.772 }, 00:16:18.772 "peer_address": { 00:16:18.772 "trtype": "TCP", 00:16:18.772 "adrfam": "IPv4", 00:16:18.772 "traddr": "10.0.0.1", 00:16:18.772 "trsvcid": "37542" 00:16:18.772 }, 00:16:18.772 "auth": { 00:16:18.772 "state": "completed", 00:16:18.772 "digest": "sha256", 00:16:18.772 "dhgroup": "ffdhe8192" 00:16:18.772 } 00:16:18.772 } 00:16:18.772 ]' 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:18.772 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:19.031 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:19.031 10:33:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:19.595 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.595 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 2 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha256 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:19.853 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:20.419 00:16:20.419 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:20.419 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:20.419 10:34:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:20.419 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:20.419 { 00:16:20.419 "cntlid": 45, 00:16:20.419 "qid": 0, 00:16:20.419 "state": "enabled", 00:16:20.419 "thread": "nvmf_tgt_poll_group_000", 00:16:20.419 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:20.419 
"listen_address": { 00:16:20.419 "trtype": "TCP", 00:16:20.419 "adrfam": "IPv4", 00:16:20.419 "traddr": "10.0.0.2", 00:16:20.419 "trsvcid": "4420" 00:16:20.420 }, 00:16:20.420 "peer_address": { 00:16:20.420 "trtype": "TCP", 00:16:20.420 "adrfam": "IPv4", 00:16:20.420 "traddr": "10.0.0.1", 00:16:20.420 "trsvcid": "53244" 00:16:20.420 }, 00:16:20.420 "auth": { 00:16:20.420 "state": "completed", 00:16:20.420 "digest": "sha256", 00:16:20.420 "dhgroup": "ffdhe8192" 00:16:20.420 } 00:16:20.420 } 00:16:20.420 ]' 00:16:20.420 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:20.677 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:20.935 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:20.935 10:34:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:21.500 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:21.500 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:21.500 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha256 ffdhe8192 3 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha256 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:21.501 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:21.758 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:21.758 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:21.758 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:21.758 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:22.015 00:16:22.015 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:22.015 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:22.015 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:22.273 { 00:16:22.273 "cntlid": 47, 00:16:22.273 "qid": 0, 00:16:22.273 "state": "enabled", 00:16:22.273 "thread": "nvmf_tgt_poll_group_000", 00:16:22.273 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:22.273 "listen_address": { 00:16:22.273 "trtype": "TCP", 00:16:22.273 "adrfam": "IPv4", 00:16:22.273 "traddr": "10.0.0.2", 00:16:22.273 "trsvcid": "4420" 00:16:22.273 }, 00:16:22.273 "peer_address": { 00:16:22.273 "trtype": "TCP", 00:16:22.273 "adrfam": "IPv4", 00:16:22.273 "traddr": "10.0.0.1", 00:16:22.273 "trsvcid": "53268" 00:16:22.273 }, 00:16:22.273 "auth": { 00:16:22.273 "state": "completed", 00:16:22.273 "digest": "sha256", 00:16:22.273 "dhgroup": "ffdhe8192" 00:16:22.273 } 00:16:22.273 } 00:16:22.273 ]' 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:22.273 10:34:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:22.273 10:34:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:22.542 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:23.108 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:23.108 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:23.366 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.366 10:34:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 0 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.366 
10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.366 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:23.624 00:16:23.624 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:23.624 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:23.624 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:23.882 { 00:16:23.882 "cntlid": 49, 00:16:23.882 "qid": 0, 00:16:23.882 "state": "enabled", 00:16:23.882 "thread": "nvmf_tgt_poll_group_000", 00:16:23.882 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:23.882 "listen_address": { 00:16:23.882 "trtype": "TCP", 00:16:23.882 "adrfam": "IPv4", 00:16:23.882 "traddr": "10.0.0.2", 00:16:23.882 "trsvcid": "4420" 00:16:23.882 }, 00:16:23.882 "peer_address": { 00:16:23.882 "trtype": "TCP", 00:16:23.882 "adrfam": "IPv4", 00:16:23.882 "traddr": "10.0.0.1", 00:16:23.882 "trsvcid": "53292" 00:16:23.882 }, 00:16:23.882 "auth": { 00:16:23.882 "state": "completed", 00:16:23.882 "digest": "sha384", 00:16:23.882 "dhgroup": "null" 00:16:23.882 } 00:16:23.882 } 00:16:23.882 ]' 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:23.882 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:24.141 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:24.141 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 
00:16:24.141 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:24.141 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:24.141 10:34:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:24.707 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:24.707 10:34:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.707 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 1 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller 
-t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:24.965 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:25.223 00:16:25.223 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:25.223 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:25.223 10:34:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:25.481 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:25.481 { 00:16:25.481 "cntlid": 51, 00:16:25.481 "qid": 0, 00:16:25.481 "state": "enabled", 00:16:25.481 "thread": "nvmf_tgt_poll_group_000", 00:16:25.481 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:25.481 "listen_address": { 00:16:25.481 "trtype": "TCP", 00:16:25.481 "adrfam": "IPv4", 00:16:25.481 "traddr": "10.0.0.2", 00:16:25.481 "trsvcid": "4420" 00:16:25.481 }, 00:16:25.481 "peer_address": { 00:16:25.482 "trtype": "TCP", 00:16:25.482 "adrfam": "IPv4", 00:16:25.482 "traddr": "10.0.0.1", 00:16:25.482 "trsvcid": "53312" 00:16:25.482 }, 00:16:25.482 "auth": { 00:16:25.482 "state": "completed", 00:16:25.482 "digest": "sha384", 00:16:25.482 "dhgroup": "null" 00:16:25.482 } 00:16:25.482 } 00:16:25.482 ]' 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:25.482 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:25.740 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:25.740 10:34:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:26.306 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.306 10:34:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:26.564 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 2 00:16:26.564 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup 
key ckey qpairs 00:16:26.564 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.565 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:26.823 00:16:26.823 10:34:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:26.823 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:26.823 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:27.082 { 00:16:27.082 "cntlid": 53, 00:16:27.082 "qid": 0, 00:16:27.082 "state": "enabled", 00:16:27.082 "thread": "nvmf_tgt_poll_group_000", 00:16:27.082 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:27.082 "listen_address": { 00:16:27.082 "trtype": "TCP", 00:16:27.082 "adrfam": "IPv4", 00:16:27.082 "traddr": "10.0.0.2", 00:16:27.082 "trsvcid": "4420" 00:16:27.082 }, 00:16:27.082 "peer_address": { 00:16:27.082 "trtype": "TCP", 00:16:27.082 "adrfam": "IPv4", 00:16:27.082 "traddr": "10.0.0.1", 00:16:27.082 "trsvcid": "53332" 00:16:27.082 }, 00:16:27.082 "auth": { 00:16:27.082 "state": "completed", 00:16:27.082 "digest": "sha384", 00:16:27.082 "dhgroup": "null" 00:16:27.082 } 00:16:27.082 } 00:16:27.082 ]' 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r 
'.[0].auth.digest' 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:27.082 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:27.340 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:27.340 10:34:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:27.907 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:27.907 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 null 3 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:28.166 
10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.166 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:28.424 00:16:28.425 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:28.425 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:28.425 10:34:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:28.425 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:28.693 10:34:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:28.693 { 00:16:28.693 "cntlid": 55, 00:16:28.693 "qid": 0, 00:16:28.693 "state": "enabled", 00:16:28.693 "thread": "nvmf_tgt_poll_group_000", 00:16:28.693 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:28.693 "listen_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.2", 00:16:28.693 "trsvcid": "4420" 00:16:28.693 }, 00:16:28.693 "peer_address": { 00:16:28.693 "trtype": "TCP", 00:16:28.693 "adrfam": "IPv4", 00:16:28.693 "traddr": "10.0.0.1", 00:16:28.693 "trsvcid": "53348" 00:16:28.693 }, 00:16:28.693 "auth": { 00:16:28.693 "state": "completed", 00:16:28.693 "digest": "sha384", 00:16:28.693 "dhgroup": "null" 00:16:28.693 } 00:16:28.693 } 00:16:28.693 ]' 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.693 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.952 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:28.952 10:34:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:29.519 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:29.520 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:29.520 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.520 10:34:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 0 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:29.778 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:30.037 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:30.037 { 00:16:30.037 "cntlid": 57, 00:16:30.037 "qid": 0, 00:16:30.037 "state": "enabled", 00:16:30.037 "thread": "nvmf_tgt_poll_group_000", 00:16:30.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:30.037 "listen_address": { 00:16:30.037 "trtype": "TCP", 00:16:30.037 "adrfam": "IPv4", 00:16:30.037 "traddr": "10.0.0.2", 00:16:30.037 
"trsvcid": "4420" 00:16:30.037 }, 00:16:30.037 "peer_address": { 00:16:30.037 "trtype": "TCP", 00:16:30.037 "adrfam": "IPv4", 00:16:30.037 "traddr": "10.0.0.1", 00:16:30.037 "trsvcid": "44254" 00:16:30.037 }, 00:16:30.037 "auth": { 00:16:30.037 "state": "completed", 00:16:30.037 "digest": "sha384", 00:16:30.037 "dhgroup": "ffdhe2048" 00:16:30.037 } 00:16:30.037 } 00:16:30.037 ]' 00:16:30.037 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.296 10:34:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:30.554 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:30.554 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.121 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 1 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:31.121 10:34:11 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.121 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.122 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.122 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.122 10:34:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:31.380 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:31.638 { 00:16:31.638 "cntlid": 59, 00:16:31.638 "qid": 0, 00:16:31.638 "state": "enabled", 00:16:31.638 "thread": "nvmf_tgt_poll_group_000", 00:16:31.638 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:31.638 "listen_address": { 00:16:31.638 "trtype": "TCP", 00:16:31.638 "adrfam": "IPv4", 00:16:31.638 "traddr": "10.0.0.2", 00:16:31.638 "trsvcid": "4420" 00:16:31.638 }, 00:16:31.638 "peer_address": { 00:16:31.638 "trtype": "TCP", 00:16:31.638 "adrfam": "IPv4", 00:16:31.638 "traddr": "10.0.0.1", 00:16:31.638 "trsvcid": "44278" 00:16:31.638 }, 00:16:31.638 "auth": { 00:16:31.638 "state": "completed", 00:16:31.638 "digest": "sha384", 00:16:31.638 "dhgroup": "ffdhe2048" 00:16:31.638 } 00:16:31.638 } 00:16:31.638 ]' 00:16:31.638 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:31.638 10:34:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:31.896 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.154 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:32.154 10:34:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.719 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 2 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.719 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:32.977 00:16:32.977 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:32.977 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:32.977 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.235 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.235 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.236 10:34:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:33.236 { 00:16:33.236 "cntlid": 61, 00:16:33.236 "qid": 0, 00:16:33.236 "state": "enabled", 00:16:33.236 "thread": "nvmf_tgt_poll_group_000", 00:16:33.236 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:33.236 "listen_address": { 00:16:33.236 "trtype": "TCP", 00:16:33.236 "adrfam": "IPv4", 00:16:33.236 "traddr": "10.0.0.2", 00:16:33.236 "trsvcid": "4420" 00:16:33.236 }, 00:16:33.236 "peer_address": { 00:16:33.236 "trtype": "TCP", 00:16:33.236 "adrfam": "IPv4", 00:16:33.236 "traddr": "10.0.0.1", 00:16:33.236 "trsvcid": "44292" 00:16:33.236 }, 00:16:33.236 "auth": { 00:16:33.236 "state": "completed", 00:16:33.236 "digest": "sha384", 00:16:33.236 "dhgroup": "ffdhe2048" 00:16:33.236 } 00:16:33.236 } 00:16:33.236 ]' 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.236 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:33.494 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.494 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.494 10:34:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:33.494 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:33.494 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.061 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.061 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe2048 3 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.320 10:34:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:34.577 00:16:34.577 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:34.577 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:34.577 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:34.835 { 00:16:34.835 "cntlid": 63, 00:16:34.835 "qid": 0, 00:16:34.835 "state": "enabled", 00:16:34.835 "thread": "nvmf_tgt_poll_group_000", 00:16:34.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:34.835 "listen_address": { 00:16:34.835 "trtype": "TCP", 00:16:34.835 "adrfam": 
"IPv4", 00:16:34.835 "traddr": "10.0.0.2", 00:16:34.835 "trsvcid": "4420" 00:16:34.835 }, 00:16:34.835 "peer_address": { 00:16:34.835 "trtype": "TCP", 00:16:34.835 "adrfam": "IPv4", 00:16:34.835 "traddr": "10.0.0.1", 00:16:34.835 "trsvcid": "44316" 00:16:34.835 }, 00:16:34.835 "auth": { 00:16:34.835 "state": "completed", 00:16:34.835 "digest": "sha384", 00:16:34.835 "dhgroup": "ffdhe2048" 00:16:34.835 } 00:16:34.835 } 00:16:34.835 ]' 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:34.835 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.093 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:35.093 10:34:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.661 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.661 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 0 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:35.920 
10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:35.920 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:36.179 00:16:36.179 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:36.179 10:34:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:36.179 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:36.437 { 00:16:36.437 "cntlid": 65, 00:16:36.437 "qid": 0, 00:16:36.437 "state": "enabled", 00:16:36.437 "thread": "nvmf_tgt_poll_group_000", 00:16:36.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:36.437 "listen_address": { 00:16:36.437 "trtype": "TCP", 00:16:36.437 "adrfam": "IPv4", 00:16:36.437 "traddr": "10.0.0.2", 00:16:36.437 "trsvcid": "4420" 00:16:36.437 }, 00:16:36.437 "peer_address": { 00:16:36.437 "trtype": "TCP", 00:16:36.437 "adrfam": "IPv4", 00:16:36.437 "traddr": "10.0.0.1", 00:16:36.437 "trsvcid": "44336" 00:16:36.437 }, 00:16:36.437 "auth": { 00:16:36.437 "state": "completed", 00:16:36.437 "digest": "sha384", 00:16:36.437 "dhgroup": "ffdhe3072" 00:16:36.437 } 00:16:36.437 } 00:16:36.437 ]' 00:16:36.437 10:34:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 
== \s\h\a\3\8\4 ]] 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.437 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.695 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:36.695 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.262 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # 
rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.262 10:34:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 1 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 
--dhchap-ctrlr-key ckey1 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.520 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:37.778 00:16:37.778 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:37.778 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:37.778 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.037 10:34:18 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:38.037 { 00:16:38.037 "cntlid": 67, 00:16:38.037 "qid": 0, 00:16:38.037 "state": "enabled", 00:16:38.037 "thread": "nvmf_tgt_poll_group_000", 00:16:38.037 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:38.037 "listen_address": { 00:16:38.037 "trtype": "TCP", 00:16:38.037 "adrfam": "IPv4", 00:16:38.037 "traddr": "10.0.0.2", 00:16:38.037 "trsvcid": "4420" 00:16:38.037 }, 00:16:38.037 "peer_address": { 00:16:38.037 "trtype": "TCP", 00:16:38.037 "adrfam": "IPv4", 00:16:38.037 "traddr": "10.0.0.1", 00:16:38.037 "trsvcid": "44378" 00:16:38.037 }, 00:16:38.037 "auth": { 00:16:38.037 "state": "completed", 00:16:38.037 "digest": "sha384", 00:16:38.037 "dhgroup": "ffdhe3072" 00:16:38.037 } 00:16:38.037 } 00:16:38.037 ]' 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.037 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.296 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:38.296 10:34:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.864 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:38.864 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 2 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.124 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:39.382 00:16:39.382 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:39.382 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:39.382 10:34:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:39.641 { 00:16:39.641 "cntlid": 69, 00:16:39.641 "qid": 0, 00:16:39.641 "state": "enabled", 00:16:39.641 "thread": "nvmf_tgt_poll_group_000", 00:16:39.641 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:39.641 
"listen_address": { 00:16:39.641 "trtype": "TCP", 00:16:39.641 "adrfam": "IPv4", 00:16:39.641 "traddr": "10.0.0.2", 00:16:39.641 "trsvcid": "4420" 00:16:39.641 }, 00:16:39.641 "peer_address": { 00:16:39.641 "trtype": "TCP", 00:16:39.641 "adrfam": "IPv4", 00:16:39.641 "traddr": "10.0.0.1", 00:16:39.641 "trsvcid": "46900" 00:16:39.641 }, 00:16:39.641 "auth": { 00:16:39.641 "state": "completed", 00:16:39.641 "digest": "sha384", 00:16:39.641 "dhgroup": "ffdhe3072" 00:16:39.641 } 00:16:39.641 } 00:16:39.641 ]' 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.641 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.900 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:39.900 10:34:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.468 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.468 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe3072 3 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.727 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:40.987 00:16:40.987 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:40.987 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # 
jq -r '.[].name' 00:16:40.987 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:41.246 { 00:16:41.246 "cntlid": 71, 00:16:41.246 "qid": 0, 00:16:41.246 "state": "enabled", 00:16:41.246 "thread": "nvmf_tgt_poll_group_000", 00:16:41.246 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:41.246 "listen_address": { 00:16:41.246 "trtype": "TCP", 00:16:41.246 "adrfam": "IPv4", 00:16:41.246 "traddr": "10.0.0.2", 00:16:41.246 "trsvcid": "4420" 00:16:41.246 }, 00:16:41.246 "peer_address": { 00:16:41.246 "trtype": "TCP", 00:16:41.246 "adrfam": "IPv4", 00:16:41.246 "traddr": "10.0.0.1", 00:16:41.246 "trsvcid": "46940" 00:16:41.246 }, 00:16:41.246 "auth": { 00:16:41.246 "state": "completed", 00:16:41.246 "digest": "sha384", 00:16:41.246 "dhgroup": "ffdhe3072" 00:16:41.246 } 00:16:41.246 } 00:16:41.246 ]' 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:41.246 10:34:21 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.246 10:34:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.505 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:41.505 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:42.071 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 
00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:42.071 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.072 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:42.329 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 0 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.330 10:34:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:42.587 00:16:42.587 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:42.587 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:42.587 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.845 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.845 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.845 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:42.845 10:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.845 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:42.845 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:42.845 { 00:16:42.845 "cntlid": 73, 00:16:42.845 "qid": 0, 00:16:42.845 "state": "enabled", 00:16:42.845 "thread": "nvmf_tgt_poll_group_000", 00:16:42.845 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:42.846 "listen_address": { 00:16:42.846 "trtype": "TCP", 00:16:42.846 "adrfam": "IPv4", 00:16:42.846 "traddr": "10.0.0.2", 00:16:42.846 "trsvcid": "4420" 00:16:42.846 }, 00:16:42.846 "peer_address": { 00:16:42.846 "trtype": "TCP", 00:16:42.846 "adrfam": "IPv4", 00:16:42.846 "traddr": "10.0.0.1", 00:16:42.846 "trsvcid": "46982" 00:16:42.846 }, 00:16:42.846 "auth": { 00:16:42.846 "state": "completed", 00:16:42.846 "digest": "sha384", 00:16:42.846 "dhgroup": "ffdhe4096" 00:16:42.846 } 00:16:42.846 } 00:16:42.846 ]' 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.846 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.846 10:34:23 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:43.104 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:43.104 10:34:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:43.742 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.742 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.742 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:43.742 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:43.743 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.743 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:43.743 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:43.743 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:43.743 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 1 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 
-s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.005 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:44.264 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:44.264 { 00:16:44.264 "cntlid": 75, 00:16:44.264 "qid": 0, 00:16:44.264 "state": "enabled", 00:16:44.264 "thread": "nvmf_tgt_poll_group_000", 00:16:44.264 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:44.264 
"listen_address": { 00:16:44.264 "trtype": "TCP", 00:16:44.264 "adrfam": "IPv4", 00:16:44.264 "traddr": "10.0.0.2", 00:16:44.264 "trsvcid": "4420" 00:16:44.264 }, 00:16:44.264 "peer_address": { 00:16:44.264 "trtype": "TCP", 00:16:44.264 "adrfam": "IPv4", 00:16:44.264 "traddr": "10.0.0.1", 00:16:44.264 "trsvcid": "46992" 00:16:44.264 }, 00:16:44.264 "auth": { 00:16:44.264 "state": "completed", 00:16:44.264 "digest": "sha384", 00:16:44.264 "dhgroup": "ffdhe4096" 00:16:44.264 } 00:16:44.264 } 00:16:44.264 ]' 00:16:44.264 10:34:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.522 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.781 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:44.781 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 
-n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:45.348 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.348 10:34:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 2 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # digest=sha384 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.610 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:45.869 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:45.869 { 00:16:45.869 "cntlid": 77, 00:16:45.869 "qid": 0, 00:16:45.869 "state": "enabled", 00:16:45.869 "thread": "nvmf_tgt_poll_group_000", 00:16:45.869 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:45.869 "listen_address": { 00:16:45.869 "trtype": "TCP", 00:16:45.869 "adrfam": "IPv4", 00:16:45.869 "traddr": "10.0.0.2", 00:16:45.869 "trsvcid": "4420" 00:16:45.869 }, 00:16:45.869 "peer_address": { 00:16:45.869 "trtype": "TCP", 00:16:45.869 "adrfam": "IPv4", 00:16:45.869 "traddr": "10.0.0.1", 00:16:45.869 "trsvcid": "47028" 00:16:45.869 }, 00:16:45.869 "auth": { 00:16:45.869 "state": "completed", 00:16:45.869 "digest": "sha384", 00:16:45.869 "dhgroup": "ffdhe4096" 00:16:45.869 } 00:16:45.869 } 00:16:45.869 ]' 00:16:45.869 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:46.127 10:34:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:46.127 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:46.127 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.127 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:46.128 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.128 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.128 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:46.386 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:46.386 10:34:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.952 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd 
nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe4096 3 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:46.952 10:34:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:46.952 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:47.211 00:16:47.211 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:47.211 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:47.211 10:34:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:47.469 10:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:47.469 { 00:16:47.469 "cntlid": 79, 00:16:47.469 "qid": 0, 00:16:47.469 "state": "enabled", 00:16:47.469 "thread": "nvmf_tgt_poll_group_000", 00:16:47.469 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:47.469 "listen_address": { 00:16:47.469 "trtype": "TCP", 00:16:47.469 "adrfam": "IPv4", 00:16:47.469 "traddr": "10.0.0.2", 00:16:47.469 "trsvcid": "4420" 00:16:47.469 }, 00:16:47.469 "peer_address": { 00:16:47.469 "trtype": "TCP", 00:16:47.469 "adrfam": "IPv4", 00:16:47.469 "traddr": "10.0.0.1", 00:16:47.469 "trsvcid": "47064" 00:16:47.469 }, 00:16:47.469 "auth": { 00:16:47.469 "state": "completed", 00:16:47.469 "digest": "sha384", 00:16:47.469 "dhgroup": "ffdhe4096" 00:16:47.469 } 00:16:47.469 } 00:16:47.469 ]' 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:47.469 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:47.728 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:47.728 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:47.728 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:47.728 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:47.728 10:34:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.986 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:47.986 10:34:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:48.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 
--dhchap-dhgroups ffdhe6144 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 0 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:48.562 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:49.133 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:49.133 { 00:16:49.133 "cntlid": 81, 00:16:49.133 "qid": 0, 00:16:49.133 "state": "enabled", 00:16:49.133 "thread": "nvmf_tgt_poll_group_000", 00:16:49.133 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:49.133 "listen_address": { 
00:16:49.133 "trtype": "TCP", 00:16:49.133 "adrfam": "IPv4", 00:16:49.133 "traddr": "10.0.0.2", 00:16:49.133 "trsvcid": "4420" 00:16:49.133 }, 00:16:49.133 "peer_address": { 00:16:49.133 "trtype": "TCP", 00:16:49.133 "adrfam": "IPv4", 00:16:49.133 "traddr": "10.0.0.1", 00:16:49.133 "trsvcid": "47098" 00:16:49.133 }, 00:16:49.133 "auth": { 00:16:49.133 "state": "completed", 00:16:49.133 "digest": "sha384", 00:16:49.133 "dhgroup": "ffdhe6144" 00:16:49.133 } 00:16:49.133 } 00:16:49.133 ]' 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:49.133 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:49.392 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:49.392 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:49.392 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.392 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.392 10:34:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:49.651 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:49.651 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme 
connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.219 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 1 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 
00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.219 10:34:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:50.787 00:16:50.787 10:34:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:50.787 { 00:16:50.787 "cntlid": 83, 00:16:50.787 "qid": 0, 00:16:50.787 "state": "enabled", 00:16:50.787 "thread": "nvmf_tgt_poll_group_000", 00:16:50.787 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:50.787 "listen_address": { 00:16:50.787 "trtype": "TCP", 00:16:50.787 "adrfam": "IPv4", 00:16:50.787 "traddr": "10.0.0.2", 00:16:50.787 "trsvcid": "4420" 00:16:50.787 }, 00:16:50.787 "peer_address": { 00:16:50.787 "trtype": "TCP", 00:16:50.787 "adrfam": "IPv4", 00:16:50.787 "traddr": "10.0.0.1", 00:16:50.787 "trsvcid": "49412" 00:16:50.787 }, 00:16:50.787 "auth": { 00:16:50.787 "state": "completed", 00:16:50.787 "digest": "sha384", 00:16:50.787 "dhgroup": "ffdhe6144" 00:16:50.787 } 00:16:50.787 } 00:16:50.787 ]' 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq 
-r '.[0].auth.digest' 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:50.787 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:51.045 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.045 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:51.045 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.045 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.045 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.303 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:51.303 10:34:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:51.869 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:51.869 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:51.869 10:34:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 2 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:51.870 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:52.437 00:16:52.437 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:52.437 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:52.437 10:34:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:52.437 { 00:16:52.437 "cntlid": 85, 00:16:52.437 "qid": 0, 00:16:52.437 "state": "enabled", 00:16:52.437 "thread": "nvmf_tgt_poll_group_000", 00:16:52.437 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:52.437 "listen_address": { 00:16:52.437 "trtype": "TCP", 00:16:52.437 "adrfam": "IPv4", 00:16:52.437 "traddr": "10.0.0.2", 00:16:52.437 "trsvcid": "4420" 00:16:52.437 }, 00:16:52.437 "peer_address": { 00:16:52.437 "trtype": "TCP", 00:16:52.437 "adrfam": "IPv4", 00:16:52.437 "traddr": "10.0.0.1", 00:16:52.437 "trsvcid": "49442" 00:16:52.437 }, 00:16:52.437 "auth": { 00:16:52.437 "state": "completed", 00:16:52.437 "digest": "sha384", 00:16:52.437 "dhgroup": "ffdhe6144" 00:16:52.437 } 00:16:52.437 } 00:16:52.437 ]' 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:52.437 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:52.695 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:52.695 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:52.695 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:52.695 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:52.695 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:52.954 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:52.954 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:53.521 10:34:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:53.521 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe6144 3 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:53.521 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:53.522 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:16:53.522 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp 
-f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:53.522 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:16:54.088 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:54.088 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:54.088 { 00:16:54.088 "cntlid": 87, 00:16:54.088 "qid": 0, 00:16:54.088 "state": "enabled", 00:16:54.088 "thread": "nvmf_tgt_poll_group_000", 00:16:54.089 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:54.089 "listen_address": { 00:16:54.089 "trtype": 
"TCP", 00:16:54.089 "adrfam": "IPv4", 00:16:54.089 "traddr": "10.0.0.2", 00:16:54.089 "trsvcid": "4420" 00:16:54.089 }, 00:16:54.089 "peer_address": { 00:16:54.089 "trtype": "TCP", 00:16:54.089 "adrfam": "IPv4", 00:16:54.089 "traddr": "10.0.0.1", 00:16:54.089 "trsvcid": "49460" 00:16:54.089 }, 00:16:54.089 "auth": { 00:16:54.089 "state": "completed", 00:16:54.089 "digest": "sha384", 00:16:54.089 "dhgroup": "ffdhe6144" 00:16:54.089 } 00:16:54.089 } 00:16:54.089 ]' 00:16:54.089 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:54.089 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:54.089 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:54.348 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.348 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:54.348 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.348 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.348 10:34:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.607 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:54.607 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.175 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 0 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:55.175 10:34:35 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.175 10:34:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:55.743 00:16:55.743 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:55.743 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:55.743 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:56.001 { 00:16:56.001 "cntlid": 89, 00:16:56.001 "qid": 0, 00:16:56.001 "state": "enabled", 00:16:56.001 "thread": "nvmf_tgt_poll_group_000", 00:16:56.001 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:56.001 "listen_address": { 00:16:56.001 "trtype": "TCP", 00:16:56.001 "adrfam": "IPv4", 00:16:56.001 "traddr": "10.0.0.2", 00:16:56.001 "trsvcid": "4420" 00:16:56.001 }, 00:16:56.001 "peer_address": { 00:16:56.001 "trtype": "TCP", 00:16:56.001 "adrfam": "IPv4", 00:16:56.001 "traddr": "10.0.0.1", 00:16:56.001 "trsvcid": "49490" 00:16:56.001 }, 00:16:56.001 "auth": { 00:16:56.001 "state": "completed", 00:16:56.001 "digest": "sha384", 00:16:56.001 "dhgroup": "ffdhe8192" 00:16:56.001 } 00:16:56.001 } 00:16:56.001 ]' 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:56.001 10:34:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.001 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.260 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:56.260 10:34:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.827 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:56.827 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 1 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.085 10:34:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:57.652 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:57.652 { 00:16:57.652 "cntlid": 91, 00:16:57.652 "qid": 0, 00:16:57.652 "state": "enabled", 00:16:57.652 "thread": "nvmf_tgt_poll_group_000", 00:16:57.652 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:57.652 "listen_address": { 00:16:57.652 "trtype": "TCP", 00:16:57.652 "adrfam": "IPv4", 00:16:57.652 "traddr": "10.0.0.2", 00:16:57.652 "trsvcid": "4420" 00:16:57.652 }, 00:16:57.652 "peer_address": { 00:16:57.652 "trtype": "TCP", 00:16:57.652 "adrfam": "IPv4", 00:16:57.652 "traddr": "10.0.0.1", 00:16:57.652 "trsvcid": "49510" 00:16:57.652 }, 00:16:57.652 "auth": { 00:16:57.652 "state": "completed", 00:16:57.652 "digest": "sha384", 00:16:57.652 "dhgroup": "ffdhe8192" 00:16:57.652 } 00:16:57.652 } 00:16:57.652 ]' 00:16:57.652 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d 
]] 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:57.911 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.170 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:58.170 10:34:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.736 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 
00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.736 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 2 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:58.994 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:59.252 00:16:59.252 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:16:59.252 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:16:59.252 10:34:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:16:59.511 { 00:16:59.511 "cntlid": 93, 00:16:59.511 "qid": 0, 00:16:59.511 "state": "enabled", 00:16:59.511 "thread": "nvmf_tgt_poll_group_000", 00:16:59.511 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:16:59.511 "listen_address": { 00:16:59.511 "trtype": "TCP", 00:16:59.511 "adrfam": "IPv4", 00:16:59.511 "traddr": "10.0.0.2", 00:16:59.511 "trsvcid": "4420" 00:16:59.511 }, 00:16:59.511 "peer_address": { 00:16:59.511 "trtype": "TCP", 00:16:59.511 "adrfam": "IPv4", 00:16:59.511 "traddr": "10.0.0.1", 00:16:59.511 "trsvcid": "49542" 00:16:59.511 }, 00:16:59.511 "auth": { 00:16:59.511 "state": "completed", 00:16:59.511 "digest": "sha384", 00:16:59.511 "dhgroup": "ffdhe8192" 00:16:59.511 } 00:16:59.511 } 00:16:59.511 ]' 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:16:59.511 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:59.770 10:34:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:16:59.770 10:34:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:00.339 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.598 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha384 ffdhe8192 3 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local 
digest dhgroup key ckey qpairs 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha384 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:00.598 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:01.164 00:17:01.164 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc 
bdev_nvme_get_controllers 00:17:01.164 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:01.164 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.422 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.422 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.423 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:01.423 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.423 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:01.423 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:01.423 { 00:17:01.423 "cntlid": 95, 00:17:01.423 "qid": 0, 00:17:01.423 "state": "enabled", 00:17:01.423 "thread": "nvmf_tgt_poll_group_000", 00:17:01.423 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:01.423 "listen_address": { 00:17:01.423 "trtype": "TCP", 00:17:01.423 "adrfam": "IPv4", 00:17:01.423 "traddr": "10.0.0.2", 00:17:01.423 "trsvcid": "4420" 00:17:01.423 }, 00:17:01.423 "peer_address": { 00:17:01.423 "trtype": "TCP", 00:17:01.423 "adrfam": "IPv4", 00:17:01.423 "traddr": "10.0.0.1", 00:17:01.423 "trsvcid": "33328" 00:17:01.423 }, 00:17:01.423 "auth": { 00:17:01.423 "state": "completed", 00:17:01.423 "digest": "sha384", 00:17:01.423 "dhgroup": "ffdhe8192" 00:17:01.423 } 00:17:01.423 } 00:17:01.423 ]' 00:17:01.423 10:34:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:01.423 10:34:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.423 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.681 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:01.681 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.248 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # for digest in "${digests[@]}" 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.248 10:34:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 0 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 
-- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.507 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:02.766 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:02.766 10:34:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:02.766 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:03.025 { 00:17:03.025 "cntlid": 97, 00:17:03.025 "qid": 0, 00:17:03.025 "state": "enabled", 00:17:03.025 "thread": "nvmf_tgt_poll_group_000", 00:17:03.025 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:03.025 "listen_address": { 00:17:03.025 "trtype": "TCP", 00:17:03.025 "adrfam": "IPv4", 00:17:03.025 "traddr": "10.0.0.2", 00:17:03.025 "trsvcid": "4420" 00:17:03.025 }, 00:17:03.025 "peer_address": { 00:17:03.025 "trtype": "TCP", 00:17:03.025 "adrfam": "IPv4", 00:17:03.025 "traddr": "10.0.0.1", 00:17:03.025 "trsvcid": "33360" 00:17:03.025 }, 00:17:03.025 "auth": { 00:17:03.025 "state": "completed", 00:17:03.025 "digest": "sha512", 00:17:03.025 "dhgroup": "null" 00:17:03.025 } 00:17:03.025 } 00:17:03.025 ]' 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.025 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.284 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:03.284 10:34:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 1 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # 
bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:03.851 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:04.110 00:17:04.110 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:04.110 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.110 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:04.368 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.368 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.368 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:04.368 10:34:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.368 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:04.368 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:04.368 { 00:17:04.368 "cntlid": 99, 
00:17:04.368 "qid": 0, 00:17:04.368 "state": "enabled", 00:17:04.368 "thread": "nvmf_tgt_poll_group_000", 00:17:04.368 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:04.368 "listen_address": { 00:17:04.368 "trtype": "TCP", 00:17:04.368 "adrfam": "IPv4", 00:17:04.368 "traddr": "10.0.0.2", 00:17:04.368 "trsvcid": "4420" 00:17:04.368 }, 00:17:04.368 "peer_address": { 00:17:04.368 "trtype": "TCP", 00:17:04.368 "adrfam": "IPv4", 00:17:04.368 "traddr": "10.0.0.1", 00:17:04.368 "trsvcid": "33372" 00:17:04.368 }, 00:17:04.368 "auth": { 00:17:04.368 "state": "completed", 00:17:04.368 "digest": "sha512", 00:17:04.368 "dhgroup": "null" 00:17:04.368 } 00:17:04.368 } 00:17:04.368 ]' 00:17:04.368 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:04.368 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:04.368 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret 
DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:04.626 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.195 10:34:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 2 
00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.454 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.455 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.455 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:05.714 00:17:05.714 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:05.714 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:05.714 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:05.972 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:05.972 { 00:17:05.972 "cntlid": 101, 00:17:05.972 "qid": 0, 00:17:05.972 "state": "enabled", 00:17:05.972 "thread": "nvmf_tgt_poll_group_000", 00:17:05.972 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:05.972 "listen_address": { 00:17:05.972 "trtype": "TCP", 00:17:05.972 "adrfam": "IPv4", 00:17:05.972 "traddr": "10.0.0.2", 00:17:05.972 "trsvcid": "4420" 00:17:05.972 }, 00:17:05.973 "peer_address": { 00:17:05.973 "trtype": "TCP", 00:17:05.973 "adrfam": "IPv4", 00:17:05.973 "traddr": "10.0.0.1", 00:17:05.973 "trsvcid": "33410" 00:17:05.973 }, 00:17:05.973 "auth": { 00:17:05.973 "state": "completed", 00:17:05.973 "digest": "sha512", 00:17:05.973 "dhgroup": "null" 00:17:05.973 } 00:17:05.973 } 
00:17:05.973 ]' 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:05.973 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.231 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:06.231 10:34:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.800 
NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:06.800 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 null 3 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=null 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.060 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:07.319 00:17:07.319 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:07.319 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:07.319 10:34:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs 
nqn.2024-03.io.spdk:cnode0 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:07.578 { 00:17:07.578 "cntlid": 103, 00:17:07.578 "qid": 0, 00:17:07.578 "state": "enabled", 00:17:07.578 "thread": "nvmf_tgt_poll_group_000", 00:17:07.578 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:07.578 "listen_address": { 00:17:07.578 "trtype": "TCP", 00:17:07.578 "adrfam": "IPv4", 00:17:07.578 "traddr": "10.0.0.2", 00:17:07.578 "trsvcid": "4420" 00:17:07.578 }, 00:17:07.578 "peer_address": { 00:17:07.578 "trtype": "TCP", 00:17:07.578 "adrfam": "IPv4", 00:17:07.578 "traddr": "10.0.0.1", 00:17:07.578 "trsvcid": "33426" 00:17:07.578 }, 00:17:07.578 "auth": { 00:17:07.578 "state": "completed", 00:17:07.578 "digest": "sha512", 00:17:07.578 "dhgroup": "null" 00:17:07.578 } 00:17:07.578 } 00:17:07.578 ]' 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ null == \n\u\l\l ]] 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.578 10:34:48 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.578 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.838 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:07.838 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.405 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.405 10:34:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:08.405 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:08.405 10:34:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.405 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 0 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc 
bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.663 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:08.921 00:17:08.921 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:08.921 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:08.921 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:09.180 { 00:17:09.180 "cntlid": 105, 00:17:09.180 "qid": 0, 00:17:09.180 "state": "enabled", 00:17:09.180 "thread": "nvmf_tgt_poll_group_000", 00:17:09.180 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:09.180 "listen_address": { 00:17:09.180 "trtype": "TCP", 00:17:09.180 "adrfam": "IPv4", 00:17:09.180 "traddr": "10.0.0.2", 00:17:09.180 "trsvcid": "4420" 00:17:09.180 }, 00:17:09.180 "peer_address": { 00:17:09.180 "trtype": "TCP", 00:17:09.180 "adrfam": "IPv4", 00:17:09.180 "traddr": "10.0.0.1", 00:17:09.180 "trsvcid": "33460" 00:17:09.180 }, 00:17:09.180 "auth": { 00:17:09.180 "state": "completed", 00:17:09.180 "digest": "sha512", 00:17:09.180 "dhgroup": "ffdhe2048" 00:17:09.180 } 00:17:09.180 } 00:17:09.180 ]' 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:09.180 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:09.438 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret 
DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:09.438 10:34:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:10.003 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.003 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:10.263 10:34:50 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 1 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.263 10:34:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:10.522 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:10.522 { 00:17:10.522 "cntlid": 107, 00:17:10.522 "qid": 0, 00:17:10.522 "state": "enabled", 00:17:10.522 "thread": "nvmf_tgt_poll_group_000", 00:17:10.522 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:10.522 "listen_address": { 00:17:10.522 "trtype": "TCP", 00:17:10.522 "adrfam": "IPv4", 00:17:10.522 "traddr": "10.0.0.2", 00:17:10.522 "trsvcid": "4420" 00:17:10.522 }, 00:17:10.522 "peer_address": { 00:17:10.522 "trtype": "TCP", 00:17:10.522 "adrfam": "IPv4", 00:17:10.522 "traddr": "10.0.0.1", 00:17:10.522 "trsvcid": "41500" 00:17:10.522 }, 00:17:10.522 "auth": { 00:17:10.522 "state": 
"completed", 00:17:10.522 "digest": "sha512", 00:17:10.522 "dhgroup": "ffdhe2048" 00:17:10.522 } 00:17:10.522 } 00:17:10.522 ]' 00:17:10.522 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.781 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.039 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:11.039 10:34:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:11.604 10:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.604 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 2 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.604 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:11.862 00:17:11.862 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:11.862 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:11.862 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.121 
10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:12.121 { 00:17:12.121 "cntlid": 109, 00:17:12.121 "qid": 0, 00:17:12.121 "state": "enabled", 00:17:12.121 "thread": "nvmf_tgt_poll_group_000", 00:17:12.121 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:12.121 "listen_address": { 00:17:12.121 "trtype": "TCP", 00:17:12.121 "adrfam": "IPv4", 00:17:12.121 "traddr": "10.0.0.2", 00:17:12.121 "trsvcid": "4420" 00:17:12.121 }, 00:17:12.121 "peer_address": { 00:17:12.121 "trtype": "TCP", 00:17:12.121 "adrfam": "IPv4", 00:17:12.121 "traddr": "10.0.0.1", 00:17:12.121 "trsvcid": "41534" 00:17:12.121 }, 00:17:12.121 "auth": { 00:17:12.121 "state": "completed", 00:17:12.121 "digest": "sha512", 00:17:12.121 "dhgroup": "ffdhe2048" 00:17:12.121 } 00:17:12.121 } 00:17:12.121 ]' 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:12.121 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:12.379 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:12.379 10:34:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:12.379 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:12.379 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:12.379 10:34:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:12.638 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:12.638 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.205 
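The log above repeats one test iteration per (digest, dhgroup, keyid) combination: restrict the host driver's DH-HMAC-CHAP options, allow the host on the subsystem with the key pair under test, attach a controller (which forces the authentication handshake), verify, then tear down. The following is a dry-run sketch of that cycle, not the actual `target/auth.sh` source; the paths, NQNs, and RPC names are taken from the log, and `run` echoes commands instead of invoking the real `rpc.py`.

```shell
# Dry-run sketch of one iteration of the DH-HMAC-CHAP test matrix seen in the
# log (digest x dhgroup x keyid). Values are copied from the log entries above.
rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
subnqn="nqn.2024-03.io.spdk:cnode0"
hostnqn="nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562"
digest=sha512 dhgroup=ffdhe2048 keyid=2

run() { echo "+ $*"; }   # swap the body for "$@" to execute the commands for real

# 1. Restrict the host bdev driver to the digest/dhgroup under test.
run $rpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
# 2. Allow the host on the subsystem with the host key (and controller key, when set).
run $rpc nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 3. Attach a controller over TCP; this is where the CHAP handshake happens.
run $rpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q "$hostnqn" -n "$subnqn" -b nvme0 \
    --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
# 4. Tear down before the next matrix entry.
run $rpc bdev_nvme_detach_controller nvme0
run $rpc nvmf_subsystem_remove_host "$subnqn" "$hostnqn"
```

Between steps 3 and 4 the harness also confirms the handshake with `bdev_nvme_get_controllers` and `nvmf_subsystem_get_qpairs`, as the JSON dumps in the log show.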
10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe2048 3 00:17:13.205 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe2048 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.463 10:34:53 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.463 10:34:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:13.463 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:13.722 { 00:17:13.722 "cntlid": 111, 
00:17:13.722 "qid": 0, 00:17:13.722 "state": "enabled", 00:17:13.722 "thread": "nvmf_tgt_poll_group_000", 00:17:13.722 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:13.722 "listen_address": { 00:17:13.722 "trtype": "TCP", 00:17:13.722 "adrfam": "IPv4", 00:17:13.722 "traddr": "10.0.0.2", 00:17:13.722 "trsvcid": "4420" 00:17:13.722 }, 00:17:13.722 "peer_address": { 00:17:13.722 "trtype": "TCP", 00:17:13.722 "adrfam": "IPv4", 00:17:13.722 "traddr": "10.0.0.1", 00:17:13.722 "trsvcid": "41572" 00:17:13.722 }, 00:17:13.722 "auth": { 00:17:13.722 "state": "completed", 00:17:13.722 "digest": "sha512", 00:17:13.722 "dhgroup": "ffdhe2048" 00:17:13.722 } 00:17:13.722 } 00:17:13.722 ]' 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:13.722 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.981 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret 
DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:14.239 10:34:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:14.805 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:14.805 10:34:55 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 0 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:14.805 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f 
ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.063 00:17:15.063 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:15.063 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:15.063 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:15.323 { 00:17:15.323 "cntlid": 113, 00:17:15.323 "qid": 0, 00:17:15.323 "state": "enabled", 00:17:15.323 "thread": "nvmf_tgt_poll_group_000", 00:17:15.323 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:15.323 "listen_address": { 00:17:15.323 "trtype": "TCP", 00:17:15.323 "adrfam": "IPv4", 00:17:15.323 "traddr": "10.0.0.2", 00:17:15.323 "trsvcid": "4420" 00:17:15.323 }, 00:17:15.323 "peer_address": { 00:17:15.323 "trtype": "TCP", 00:17:15.323 "adrfam": "IPv4", 00:17:15.323 "traddr": "10.0.0.1", 00:17:15.323 "trsvcid": "41592" 00:17:15.323 }, 00:17:15.323 "auth": { 00:17:15.323 "state": 
"completed", 00:17:15.323 "digest": "sha512", 00:17:15.323 "dhgroup": "ffdhe3072" 00:17:15.323 } 00:17:15.323 } 00:17:15.323 ]' 00:17:15.323 10:34:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:15.323 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:15.323 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:15.323 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:15.323 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:15.581 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:15.581 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:15.581 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:15.581 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:15.581 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret 
DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.148 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.148 10:34:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 1 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- 
# key=key1 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.406 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:16.665 00:17:16.665 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:16.665 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:16.665 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:16.922 { 00:17:16.922 "cntlid": 115, 00:17:16.922 "qid": 0, 00:17:16.922 "state": "enabled", 00:17:16.922 "thread": "nvmf_tgt_poll_group_000", 00:17:16.922 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:16.922 "listen_address": { 00:17:16.922 "trtype": "TCP", 00:17:16.922 "adrfam": "IPv4", 00:17:16.922 "traddr": "10.0.0.2", 00:17:16.922 "trsvcid": "4420" 00:17:16.922 }, 00:17:16.922 "peer_address": { 00:17:16.922 "trtype": "TCP", 00:17:16.922 "adrfam": "IPv4", 00:17:16.922 "traddr": "10.0.0.1", 00:17:16.922 "trsvcid": "41620" 00:17:16.922 }, 00:17:16.922 "auth": { 00:17:16.922 "state": "completed", 00:17:16.922 "digest": "sha512", 00:17:16.922 "dhgroup": "ffdhe3072" 00:17:16.922 } 00:17:16.922 } 00:17:16.922 ]' 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:16.922 10:34:57 
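After each attach, the harness inspects the qpair that `nvmf_subsystem_get_qpairs` returns, pulling `.[0].auth.digest`, `.[0].auth.dhgroup`, and `.[0].auth.state` with `jq -r` and comparing them against the parameters under test. The sketch below replays that check on a trimmed copy of a qpair record from the log; it uses `sed` instead of `jq` so it has no external dependency, and the JSON is abbreviated, not the full record.

```shell
# Trimmed qpair record, with values copied from the nvmf_subsystem_get_qpairs
# output in the log above (the real record also carries addresses, cntlid, etc.).
qpairs='[ { "state": "enabled", "auth": { "state": "completed", "digest": "sha512", "dhgroup": "ffdhe3072" } } ]'

# The harness does: jq -r '.[0].auth.digest' — a dependency-free equivalent:
digest=$(printf '%s' "$qpairs"  | sed -n 's/.*"digest": "\([^"]*\)".*/\1/p')
dhgroup=$(printf '%s' "$qpairs" | sed -n 's/.*"dhgroup": "\([^"]*\)".*/\1/p')

# auth.state == "completed" is what proves the DH-HMAC-CHAP handshake succeeded.
[ "$digest" = "sha512" ] && [ "$dhgroup" = "ffdhe3072" ] && echo "auth params OK"
```

The `[[ sha512 == \s\h\a\5\1\2 ]]` comparisons in the trace are the xtrace rendering of these same equality checks; bash escapes each character of the right-hand pattern when printing `[[ ]]` expressions.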
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.922 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:17.180 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.180 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.180 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.180 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:17.180 10:34:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:17.744 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:17.744 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 2 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.002 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.260 00:17:18.260 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:18.260 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:18.260 10:34:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.518 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.519 10:34:59 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:18.519 { 00:17:18.519 "cntlid": 117, 00:17:18.519 "qid": 0, 00:17:18.519 "state": "enabled", 00:17:18.519 "thread": "nvmf_tgt_poll_group_000", 00:17:18.519 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:18.519 "listen_address": { 00:17:18.519 "trtype": "TCP", 00:17:18.519 "adrfam": "IPv4", 00:17:18.519 "traddr": "10.0.0.2", 00:17:18.519 "trsvcid": "4420" 00:17:18.519 }, 00:17:18.519 "peer_address": { 00:17:18.519 "trtype": "TCP", 00:17:18.519 "adrfam": "IPv4", 00:17:18.519 "traddr": "10.0.0.1", 00:17:18.519 "trsvcid": "41642" 00:17:18.519 }, 00:17:18.519 "auth": { 00:17:18.519 "state": "completed", 00:17:18.519 "digest": "sha512", 00:17:18.519 "dhgroup": "ffdhe3072" 00:17:18.519 } 00:17:18.519 } 00:17:18.519 ]' 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:18.519 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:18.775 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:18.775 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:18.775 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:18.775 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:18.775 10:34:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.341 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.341 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe3072 3 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe3072 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.599 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:19.858 00:17:19.858 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:19.858 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:19.858 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:20.116 { 00:17:20.116 "cntlid": 119, 00:17:20.116 "qid": 0, 00:17:20.116 "state": "enabled", 00:17:20.116 "thread": "nvmf_tgt_poll_group_000", 00:17:20.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:20.116 "listen_address": { 00:17:20.116 "trtype": "TCP", 00:17:20.116 "adrfam": "IPv4", 00:17:20.116 "traddr": "10.0.0.2", 00:17:20.116 "trsvcid": "4420" 00:17:20.116 }, 00:17:20.116 "peer_address": { 00:17:20.116 "trtype": "TCP", 00:17:20.116 "adrfam": "IPv4", 00:17:20.116 "traddr": "10.0.0.1", 
00:17:20.116 "trsvcid": "51214" 00:17:20.116 }, 00:17:20.116 "auth": { 00:17:20.116 "state": "completed", 00:17:20.116 "digest": "sha512", 00:17:20.116 "dhgroup": "ffdhe3072" 00:17:20.116 } 00:17:20.116 } 00:17:20.116 ]' 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.116 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:20.374 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.374 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.374 10:35:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.375 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:20.375 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:20.941 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:20.941 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 0 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:21.218 10:35:01 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.218 10:35:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.505 00:17:21.505 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:21.505 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:21.505 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:21.795 { 00:17:21.795 "cntlid": 121, 00:17:21.795 "qid": 0, 00:17:21.795 "state": "enabled", 00:17:21.795 "thread": "nvmf_tgt_poll_group_000", 00:17:21.795 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:21.795 "listen_address": { 00:17:21.795 "trtype": "TCP", 00:17:21.795 "adrfam": "IPv4", 00:17:21.795 "traddr": "10.0.0.2", 00:17:21.795 "trsvcid": "4420" 00:17:21.795 }, 00:17:21.795 "peer_address": { 00:17:21.795 "trtype": "TCP", 00:17:21.795 "adrfam": "IPv4", 00:17:21.795 "traddr": "10.0.0.1", 00:17:21.795 "trsvcid": "51238" 00:17:21.795 }, 00:17:21.795 "auth": { 00:17:21.795 "state": "completed", 00:17:21.795 "digest": "sha512", 00:17:21.795 "dhgroup": "ffdhe4096" 00:17:21.795 } 00:17:21.795 } 00:17:21.795 ]' 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:21.795 10:35:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:21.795 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.054 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:22.054 10:35:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.620 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:22.620 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.620 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 1 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:22.879 10:35:03 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.879 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.138 00:17:23.138 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:23.138 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:23.138 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:23.398 { 00:17:23.398 "cntlid": 123, 00:17:23.398 "qid": 0, 00:17:23.398 "state": "enabled", 00:17:23.398 "thread": "nvmf_tgt_poll_group_000", 00:17:23.398 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:23.398 "listen_address": { 00:17:23.398 "trtype": "TCP", 00:17:23.398 "adrfam": "IPv4", 00:17:23.398 "traddr": "10.0.0.2", 00:17:23.398 "trsvcid": "4420" 00:17:23.398 }, 00:17:23.398 "peer_address": { 00:17:23.398 "trtype": "TCP", 00:17:23.398 "adrfam": "IPv4", 00:17:23.398 "traddr": "10.0.0.1", 00:17:23.398 "trsvcid": "51258" 00:17:23.398 }, 00:17:23.398 "auth": { 00:17:23.398 "state": "completed", 00:17:23.398 "digest": "sha512", 00:17:23.398 "dhgroup": "ffdhe4096" 00:17:23.398 } 00:17:23.398 } 00:17:23.398 ]' 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.398 10:35:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:23.398 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.398 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.398 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.656 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:23.656 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.224 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:24.224 10:35:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.224 10:35:04 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 2 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.482 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.741 00:17:24.741 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:24.741 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:24.741 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:24.999 { 00:17:24.999 "cntlid": 125, 00:17:24.999 "qid": 0, 00:17:24.999 "state": "enabled", 00:17:24.999 "thread": "nvmf_tgt_poll_group_000", 00:17:24.999 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:24.999 "listen_address": { 00:17:24.999 "trtype": "TCP", 00:17:24.999 "adrfam": "IPv4", 00:17:24.999 "traddr": "10.0.0.2", 00:17:24.999 
"trsvcid": "4420" 00:17:24.999 }, 00:17:24.999 "peer_address": { 00:17:24.999 "trtype": "TCP", 00:17:24.999 "adrfam": "IPv4", 00:17:24.999 "traddr": "10.0.0.1", 00:17:24.999 "trsvcid": "51282" 00:17:24.999 }, 00:17:24.999 "auth": { 00:17:24.999 "state": "completed", 00:17:24.999 "digest": "sha512", 00:17:24.999 "dhgroup": "ffdhe4096" 00:17:24.999 } 00:17:24.999 } 00:17:24.999 ]' 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:24.999 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.258 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:25.258 10:35:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.826 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:25.826 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe4096 3 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe4096 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.085 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:26.343 00:17:26.343 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:26.343 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:26.343 10:35:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:26.602 { 00:17:26.602 "cntlid": 127, 00:17:26.602 "qid": 0, 00:17:26.602 "state": "enabled", 00:17:26.602 "thread": "nvmf_tgt_poll_group_000", 00:17:26.602 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:26.602 "listen_address": { 00:17:26.602 "trtype": "TCP", 00:17:26.602 "adrfam": "IPv4", 00:17:26.602 "traddr": "10.0.0.2", 00:17:26.602 "trsvcid": "4420" 00:17:26.602 }, 00:17:26.602 "peer_address": { 00:17:26.602 "trtype": "TCP", 00:17:26.602 "adrfam": "IPv4", 00:17:26.602 "traddr": "10.0.0.1", 00:17:26.602 "trsvcid": "51302" 00:17:26.602 }, 00:17:26.602 "auth": { 00:17:26.602 "state": "completed", 00:17:26.602 "digest": "sha512", 00:17:26.602 "dhgroup": "ffdhe4096" 00:17:26.602 } 00:17:26.602 } 00:17:26.602 ]' 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:26.602 10:35:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.602 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.860 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:26.860 10:35:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.427 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.427 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 0 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.686 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.944 00:17:27.944 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:27.944 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:27.944 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.203 10:35:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:28.203 { 00:17:28.203 "cntlid": 129, 00:17:28.203 "qid": 0, 00:17:28.203 "state": "enabled", 00:17:28.203 "thread": "nvmf_tgt_poll_group_000", 00:17:28.203 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:28.203 "listen_address": { 00:17:28.203 "trtype": "TCP", 00:17:28.203 "adrfam": "IPv4", 00:17:28.203 "traddr": "10.0.0.2", 00:17:28.203 "trsvcid": "4420" 00:17:28.203 }, 00:17:28.203 "peer_address": { 00:17:28.203 "trtype": "TCP", 00:17:28.203 "adrfam": "IPv4", 00:17:28.203 "traddr": "10.0.0.1", 00:17:28.203 "trsvcid": "51328" 00:17:28.203 }, 00:17:28.203 "auth": { 00:17:28.203 "state": "completed", 00:17:28.203 "digest": "sha512", 00:17:28.203 "dhgroup": "ffdhe6144" 00:17:28.203 } 00:17:28.203 } 00:17:28.203 ]' 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.203 10:35:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.461 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:28.461 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=: 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:29.027 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:29.027 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.027 10:35:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 1 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.285 10:35:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.543 00:17:29.543 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:29.543 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:29.543 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:29.801 { 00:17:29.801 "cntlid": 131, 00:17:29.801 "qid": 0, 00:17:29.801 "state": "enabled", 00:17:29.801 "thread": "nvmf_tgt_poll_group_000", 00:17:29.801 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:29.801 "listen_address": { 00:17:29.801 "trtype": "TCP", 00:17:29.801 "adrfam": "IPv4", 00:17:29.801 "traddr": "10.0.0.2", 00:17:29.801 
"trsvcid": "4420" 00:17:29.801 }, 00:17:29.801 "peer_address": { 00:17:29.801 "trtype": "TCP", 00:17:29.801 "adrfam": "IPv4", 00:17:29.801 "traddr": "10.0.0.1", 00:17:29.801 "trsvcid": "37076" 00:17:29.801 }, 00:17:29.801 "auth": { 00:17:29.801 "state": "completed", 00:17:29.801 "digest": "sha512", 00:17:29.801 "dhgroup": "ffdhe6144" 00:17:29.801 } 00:17:29.801 } 00:17:29.801 ]' 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:29.801 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:30.059 10:35:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 
00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==: 00:17:30.626 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.626 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.626 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:30.626 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.626 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 2 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.885 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.452 00:17:31.452 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:31.452 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r 
'.[].name' 00:17:31.452 10:35:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:31.452 { 00:17:31.452 "cntlid": 133, 00:17:31.452 "qid": 0, 00:17:31.452 "state": "enabled", 00:17:31.452 "thread": "nvmf_tgt_poll_group_000", 00:17:31.452 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:31.452 "listen_address": { 00:17:31.452 "trtype": "TCP", 00:17:31.452 "adrfam": "IPv4", 00:17:31.452 "traddr": "10.0.0.2", 00:17:31.452 "trsvcid": "4420" 00:17:31.452 }, 00:17:31.452 "peer_address": { 00:17:31.452 "trtype": "TCP", 00:17:31.452 "adrfam": "IPv4", 00:17:31.452 "traddr": "10.0.0.1", 00:17:31.452 "trsvcid": "37104" 00:17:31.452 }, 00:17:31.452 "auth": { 00:17:31.452 "state": "completed", 00:17:31.452 "digest": "sha512", 00:17:31.452 "dhgroup": "ffdhe6144" 00:17:31.452 } 00:17:31.452 } 00:17:31.452 ]' 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:31.452 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:31.452 10:35:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:31.711 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.711 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:31.711 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.711 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.711 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.973 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:31.973 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248: 00:17:32.540 10:35:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.540 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe6144 3 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe6144 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:32.540 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:33.107 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:33.108 { 00:17:33.108 "cntlid": 135, 00:17:33.108 "qid": 0, 00:17:33.108 "state": "enabled", 00:17:33.108 "thread": "nvmf_tgt_poll_group_000", 00:17:33.108 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:33.108 "listen_address": { 00:17:33.108 "trtype": "TCP", 00:17:33.108 "adrfam": "IPv4", 00:17:33.108 "traddr": "10.0.0.2", 00:17:33.108 "trsvcid": "4420" 00:17:33.108 }, 00:17:33.108 "peer_address": { 00:17:33.108 "trtype": "TCP", 00:17:33.108 "adrfam": "IPv4", 00:17:33.108 "traddr": "10.0.0.1", 00:17:33.108 "trsvcid": "37134" 00:17:33.108 }, 00:17:33.108 "auth": { 00:17:33.108 "state": "completed", 00:17:33.108 "digest": "sha512", 00:17:33.108 "dhgroup": "ffdhe6144" 00:17:33.108 } 00:17:33.108 } 00:17:33.108 ]' 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:33.108 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:33.366 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:33.366 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:33.366 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:33.366 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:33.366 10:35:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.624 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:33.624 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:34.192 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@119 -- # for dhgroup in "${dhgroups[@]}" 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}" 00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:17:34.192 10:35:14 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 0
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:34.192 10:35:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:34.759
00:17:34.759 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:34.759 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:34.759 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:35.018 {
00:17:35.018 "cntlid": 137,
00:17:35.018 "qid": 0,
00:17:35.018 "state": "enabled",
00:17:35.018 "thread": "nvmf_tgt_poll_group_000",
00:17:35.018 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:35.018 "listen_address": {
00:17:35.018 "trtype": "TCP",
00:17:35.018 "adrfam": "IPv4",
00:17:35.018 "traddr": "10.0.0.2",
00:17:35.018 "trsvcid": "4420"
00:17:35.018 },
00:17:35.018 "peer_address": {
00:17:35.018 "trtype": "TCP",
00:17:35.018 "adrfam": "IPv4",
00:17:35.018 "traddr": "10.0.0.1",
00:17:35.018 "trsvcid": "37174"
00:17:35.018 },
00:17:35.018 "auth": {
00:17:35.018 "state": "completed",
00:17:35.018 "digest": "sha512",
00:17:35.018 "dhgroup": "ffdhe8192"
00:17:35.018 }
00:17:35.018 }
00:17:35.018 ]'
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:35.018 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:35.276 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=:
00:17:35.276 10:35:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=:
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:35.843 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:35.843 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 1
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key1
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:36.102 10:35:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:36.670
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:36.670 {
00:17:36.670 "cntlid": 139,
00:17:36.670 "qid": 0,
00:17:36.670 "state": "enabled",
00:17:36.670 "thread": "nvmf_tgt_poll_group_000",
00:17:36.670 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:36.670 "listen_address": {
00:17:36.670 "trtype": "TCP",
00:17:36.670 "adrfam": "IPv4",
00:17:36.670 "traddr": "10.0.0.2",
00:17:36.670 "trsvcid": "4420"
00:17:36.670 },
00:17:36.670 "peer_address": {
00:17:36.670 "trtype": "TCP",
00:17:36.670 "adrfam": "IPv4",
00:17:36.670 "traddr": "10.0.0.1",
00:17:36.670 "trsvcid": "37206"
00:17:36.670 },
00:17:36.670 "auth": {
00:17:36.670 "state": "completed",
00:17:36.670 "digest": "sha512",
00:17:36.670 "dhgroup": "ffdhe8192"
00:17:36.670 }
00:17:36.670 }
00:17:36.670 ]'
00:17:36.670 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:36.929 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:37.187 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==:
00:17:37.188 10:35:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: --dhchap-ctrl-secret DHHC-1:02:NDc2YTNlNTRiZDNhOGU0MDg0NjcxOGI2Y2M4MWI2Nzg4NzRlNzBiMzcxMWE4YmJlS8khQQ==:
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:37.754 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 2
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:37.754 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key2
00:17:37.755 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:37.755 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:37.755 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:37.755 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.013 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.013 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:38.013 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:38.013 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:17:38.272
00:17:38.272 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:38.272 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:38.272 10:35:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:38.563 {
00:17:38.563 "cntlid": 141,
00:17:38.563 "qid": 0,
00:17:38.563 "state": "enabled",
00:17:38.563 "thread": "nvmf_tgt_poll_group_000",
00:17:38.563 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:38.563 "listen_address": {
00:17:38.563 "trtype": "TCP",
00:17:38.563 "adrfam": "IPv4",
00:17:38.563 "traddr": "10.0.0.2",
00:17:38.563 "trsvcid": "4420"
00:17:38.563 },
00:17:38.563 "peer_address": {
00:17:38.563 "trtype": "TCP",
00:17:38.563 "adrfam": "IPv4",
00:17:38.563 "traddr": "10.0.0.1",
00:17:38.563 "trsvcid": "37236"
00:17:38.563 },
00:17:38.563 "auth": {
00:17:38.563 "state": "completed",
00:17:38.563 "digest": "sha512",
00:17:38.563 "dhgroup": "ffdhe8192"
00:17:38.563 }
00:17:38.563 }
00:17:38.563 ]'
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:38.563 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:38.821 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:38.821 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:38.821 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:38.821 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248:
00:17:38.821 10:35:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:01:YTMyYjdmMmVjOTUyMDljN2JhZjJjNzdiOGZmMjk0N2F+O248:
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:39.387 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@120 -- # for keyid in "${!keys[@]}"
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:39.387 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@123 -- # connect_authenticate sha512 ffdhe8192 3
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:39.645 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3
00:17:40.209
00:17:40.209 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:40.209 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:40.209 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:40.467 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:40.467 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:40.467 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:40.467 10:35:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:40.467 {
00:17:40.467 "cntlid": 143,
00:17:40.467 "qid": 0,
00:17:40.467 "state": "enabled",
00:17:40.467 "thread": "nvmf_tgt_poll_group_000",
00:17:40.467 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:40.467 "listen_address": {
00:17:40.467 "trtype": "TCP",
00:17:40.467 "adrfam": "IPv4",
00:17:40.467 "traddr": "10.0.0.2",
00:17:40.467 "trsvcid": "4420"
00:17:40.467 },
00:17:40.467 "peer_address": {
00:17:40.467 "trtype": "TCP",
00:17:40.467 "adrfam": "IPv4",
00:17:40.467 "traddr": "10.0.0.1",
00:17:40.467 "trsvcid": "53832"
00:17:40.467 },
00:17:40.467 "auth": {
00:17:40.467 "state": "completed",
00:17:40.467 "digest": "sha512",
00:17:40.467 "dhgroup": "ffdhe8192"
00:17:40.467 }
00:17:40.467 }
00:17:40.467 ]'
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:40.467 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:40.725 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=:
00:17:40.725 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=:
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:41.290 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s sha256,sha384,sha512
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # IFS=,
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@130 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@129 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:41.290 10:35:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@141 -- # connect_authenticate sha512 ffdhe8192 0
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key0
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:41.549 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:17:42.116
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name'
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:42.116 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[
00:17:42.116 {
00:17:42.116 "cntlid": 145,
00:17:42.116 "qid": 0,
00:17:42.116 "state": "enabled",
00:17:42.116 "thread": "nvmf_tgt_poll_group_000",
00:17:42.116 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:42.116 "listen_address": {
00:17:42.116 "trtype": "TCP",
00:17:42.116 "adrfam": "IPv4",
00:17:42.116 "traddr": "10.0.0.2",
00:17:42.116 "trsvcid": "4420"
00:17:42.116 },
00:17:42.116 "peer_address": {
00:17:42.116 "trtype": "TCP",
00:17:42.116 "adrfam": "IPv4",
00:17:42.116 "traddr": "10.0.0.1",
00:17:42.116 "trsvcid": "53862"
00:17:42.116 },
00:17:42.116 "auth": {
00:17:42.116 "state": "completed",
00:17:42.116 "digest": "sha512",
00:17:42.116 "dhgroup": "ffdhe8192"
00:17:42.116 }
00:17:42.116 }
00:17:42.116 ]'
00:17:42.374 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest'
00:17:42.374 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]]
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup'
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]]
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state'
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]]
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0
00:17:42.375 10:35:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
00:17:42.633 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=:
00:17:42.633 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:00:OWFhZGI4OTY4NWM1ODdhOTliZGMyMDg3MjgzMWU4YTJmMWNjNDJjOTViZDJiMTU3fiNcYw==: --dhchap-ctrl-secret DHHC-1:03:MzY1OTk3ZjZlNmZlN2RiMDlkZDk0M2VhZGJkODdjNDY2NDNiY2JiZGFhZjRkNmY1NjllN2UwMzU3YTY5MjcxZl1KnAw=:
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0
00:17:43.201 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s)
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@144 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@145 -- # NOT bdev_connect -b nvme0 --dhchap-key key2
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key2
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key2
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:43.201 10:35:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2
00:17:43.460 request:
00:17:43.460 {
00:17:43.460 "name": "nvme0",
00:17:43.460 "trtype": "tcp",
00:17:43.460 "traddr": "10.0.0.2",
00:17:43.460 "adrfam": "ipv4",
00:17:43.460 "trsvcid": "4420",
00:17:43.460 "subnqn": "nqn.2024-03.io.spdk:cnode0",
00:17:43.460 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562",
00:17:43.460 "prchk_reftag": false,
00:17:43.460 "prchk_guard": false,
00:17:43.460 "hdgst": false,
00:17:43.460 "ddgst": false,
00:17:43.460 "dhchap_key": "key2",
00:17:43.460 "allow_unrecognized_csi": false,
00:17:43.460 "method": "bdev_nvme_attach_controller",
00:17:43.460 "req_id": 1
00:17:43.460 }
00:17:43.460 Got JSON-RPC error response
00:17:43.460 response:
00:17:43.460 {
00:17:43.460 "code": -5,
00:17:43.460 "message": "Input/output error"
00:17:43.460 }
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 ))
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]]
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 ))
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@146 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.460 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@149 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@150 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2
00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0
00:17:43.719 10:35:24
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.719 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:17:43.978 request: 00:17:43.978 { 00:17:43.978 "name": "nvme0", 00:17:43.978 "trtype": "tcp", 00:17:43.978 "traddr": "10.0.0.2", 00:17:43.978 "adrfam": "ipv4", 00:17:43.978 "trsvcid": "4420", 00:17:43.978 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:43.978 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:43.978 "prchk_reftag": false, 00:17:43.978 "prchk_guard": false, 00:17:43.978 "hdgst": 
false, 00:17:43.978 "ddgst": false, 00:17:43.978 "dhchap_key": "key1", 00:17:43.978 "dhchap_ctrlr_key": "ckey2", 00:17:43.978 "allow_unrecognized_csi": false, 00:17:43.978 "method": "bdev_nvme_attach_controller", 00:17:43.978 "req_id": 1 00:17:43.978 } 00:17:43.978 Got JSON-RPC error response 00:17:43.978 response: 00:17:43.978 { 00:17:43.978 "code": -5, 00:17:43.978 "message": "Input/output error" 00:17:43.978 } 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@151 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@154 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@155 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:43.978 10:35:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:44.546 request: 00:17:44.546 { 00:17:44.546 "name": "nvme0", 00:17:44.546 "trtype": 
"tcp", 00:17:44.546 "traddr": "10.0.0.2", 00:17:44.546 "adrfam": "ipv4", 00:17:44.546 "trsvcid": "4420", 00:17:44.546 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:44.546 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:44.546 "prchk_reftag": false, 00:17:44.546 "prchk_guard": false, 00:17:44.546 "hdgst": false, 00:17:44.546 "ddgst": false, 00:17:44.546 "dhchap_key": "key1", 00:17:44.546 "dhchap_ctrlr_key": "ckey1", 00:17:44.546 "allow_unrecognized_csi": false, 00:17:44.546 "method": "bdev_nvme_attach_controller", 00:17:44.546 "req_id": 1 00:17:44.546 } 00:17:44.546 Got JSON-RPC error response 00:17:44.546 response: 00:17:44.546 { 00:17:44.546 "code": -5, 00:17:44.546 "message": "Input/output error" 00:17:44.546 } 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@159 -- # killprocess 3206062 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@954 -- # '[' -z 3206062 ']' 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3206062 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206062 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206062' 00:17:44.546 killing process with pid 3206062 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3206062 00:17:44.546 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3206062 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@160 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@328 -- # nvmfpid=3227722 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # waitforlisten 3227722 00:17:44.805 10:35:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3227722 ']' 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:44.805 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@161 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@163 -- # waitforlisten 3227722 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@835 -- # '[' -z 3227722 ']' 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:45.064 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@868 -- # return 0 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # rpc_cmd 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.064 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 null0 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.554 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha512.Qot ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.Qot 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.blh 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha384.axg ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.axg 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.CZb 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n /tmp/spdk.key-sha256.RKP ]] 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.RKP 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.323 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@174 -- # for i in "${!keys[@]}" 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.gQ9 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # 
[[ 0 == 0 ]] 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # [[ -n '' ]] 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@179 -- # connect_authenticate sha512 ffdhe8192 3 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@65 -- # local digest dhgroup key ckey qpairs 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # digest=sha512 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # dhgroup=ffdhe8192 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # key=key3 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@71 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:45.324 10:35:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:46.259 nvme0n1 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # hostrpc bdev_nvme_get_controllers 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # jq -r '.[].name' 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # qpairs='[ 00:17:46.259 { 00:17:46.259 "cntlid": 1, 00:17:46.259 "qid": 0, 00:17:46.259 "state": "enabled", 00:17:46.259 "thread": "nvmf_tgt_poll_group_000", 00:17:46.259 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:46.259 "listen_address": { 00:17:46.259 "trtype": "TCP", 00:17:46.259 "adrfam": "IPv4", 00:17:46.259 "traddr": "10.0.0.2", 00:17:46.259 "trsvcid": "4420" 00:17:46.259 }, 00:17:46.259 "peer_address": { 00:17:46.259 "trtype": "TCP", 00:17:46.259 "adrfam": "IPv4", 00:17:46.259 "traddr": 
"10.0.0.1", 00:17:46.259 "trsvcid": "53912" 00:17:46.259 }, 00:17:46.259 "auth": { 00:17:46.259 "state": "completed", 00:17:46.259 "digest": "sha512", 00:17:46.259 "dhgroup": "ffdhe8192" 00:17:46.259 } 00:17:46.259 } 00:17:46.259 ]' 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # jq -r '.[0].auth.digest' 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@75 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:46.259 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # jq -r '.[0].auth.dhgroup' 00:17:46.518 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@76 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:46.518 10:35:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # jq -r '.[0].auth.state' 00:17:46.518 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@77 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:46.518 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@78 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:46.518 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:46.518 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@80 -- # nvme_connect --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:46.518 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:47.084 10:35:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.084 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:47.084 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.084 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.084 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@182 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key3 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@183 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:17:47.343 10:35:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@184 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.343 10:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.343 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.601 request: 00:17:47.601 { 00:17:47.601 "name": "nvme0", 00:17:47.601 "trtype": "tcp", 00:17:47.601 "traddr": "10.0.0.2", 00:17:47.601 "adrfam": "ipv4", 00:17:47.601 "trsvcid": "4420", 00:17:47.601 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:47.601 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:47.601 "prchk_reftag": false, 00:17:47.601 "prchk_guard": false, 00:17:47.601 "hdgst": false, 00:17:47.601 "ddgst": false, 00:17:47.601 "dhchap_key": "key3", 00:17:47.601 
"allow_unrecognized_csi": false, 00:17:47.601 "method": "bdev_nvme_attach_controller", 00:17:47.601 "req_id": 1 00:17:47.601 } 00:17:47.601 Got JSON-RPC error response 00:17:47.601 response: 00:17:47.601 { 00:17:47.601 "code": -5, 00:17:47.601 "message": "Input/output error" 00:17:47.601 } 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # IFS=, 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # printf %s sha256,sha384,sha512 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:47.601 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@193 -- # NOT bdev_connect -b nvme0 --dhchap-key key3 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key3 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:47.860 10:35:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key3 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:47.860 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key3 00:17:48.119 request: 00:17:48.119 { 00:17:48.119 "name": "nvme0", 00:17:48.119 "trtype": "tcp", 00:17:48.119 "traddr": "10.0.0.2", 00:17:48.119 "adrfam": "ipv4", 00:17:48.119 "trsvcid": "4420", 00:17:48.119 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.119 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.119 "prchk_reftag": false, 00:17:48.119 "prchk_guard": false, 00:17:48.119 "hdgst": false, 00:17:48.119 "ddgst": false, 00:17:48.119 "dhchap_key": "key3", 00:17:48.119 "allow_unrecognized_csi": false, 00:17:48.119 "method": "bdev_nvme_attach_controller", 00:17:48.119 "req_id": 1 00:17:48.119 } 00:17:48.119 Got JSON-RPC error response 00:17:48.119 response: 00:17:48.119 { 00:17:48.119 "code": -5, 00:17:48.119 "message": "Input/output error" 00:17:48.119 } 00:17:48.119 
10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s sha256,sha384,sha512 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # IFS=, 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@197 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@208 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@209 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@210 -- # NOT bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.119 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:48.377 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:48.377 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.377 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.377 10:35:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:48.634 request: 00:17:48.634 { 00:17:48.634 "name": "nvme0", 00:17:48.634 "trtype": "tcp", 00:17:48.634 "traddr": "10.0.0.2", 00:17:48.634 "adrfam": "ipv4", 00:17:48.634 "trsvcid": "4420", 00:17:48.634 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:48.634 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:48.634 "prchk_reftag": false, 00:17:48.634 "prchk_guard": false, 00:17:48.634 "hdgst": false, 00:17:48.634 "ddgst": false, 00:17:48.634 "dhchap_key": "key0", 00:17:48.634 "dhchap_ctrlr_key": "key1", 00:17:48.634 "allow_unrecognized_csi": false, 00:17:48.634 "method": "bdev_nvme_attach_controller", 00:17:48.634 "req_id": 1 00:17:48.634 } 00:17:48.634 Got JSON-RPC error response 00:17:48.634 response: 00:17:48.634 { 00:17:48.634 "code": -5, 00:17:48.634 "message": "Input/output error" 00:17:48.634 } 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@213 -- # bdev_connect -b nvme0 --dhchap-key key0 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:48.634 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 00:17:48.892 nvme0n1 00:17:48.892 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # hostrpc bdev_nvme_get_controllers 00:17:48.892 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # jq -r '.[].name' 00:17:48.892 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@214 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@215 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@218 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:49.150 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 
00:17:49.407 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:49.407 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@219 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:49.407 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:49.407 10:35:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:49.973 nvme0n1 00:17:49.973 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # hostrpc bdev_nvme_get_controllers 00:17:49.973 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # jq -r '.[].name' 00:17:49.973 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@220 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@222 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.231 
10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # jq -r '.[].name' 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # hostrpc bdev_nvme_get_controllers 00:17:50.231 10:35:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:50.490 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@223 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:50.490 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@225 -- # nvme_connect --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:50.490 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid 00ad29c2-ccbd-e911-906e-0017a4403562 -l 0 --dhchap-secret DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: --dhchap-ctrl-secret DHHC-1:03:YzI3NGM5Y2NhYjYwNjVjMTA5NTVlZGZlOGVkYzFmOTVkYmM5Njg0ZDFiOGQ0NjZhOGYwYzliOTIyYTA2ZDk5MUmBzpc=: 00:17:51.056 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nvme_get_ctrlr 00:17:51.056 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@41 -- # local dev 00:17:51.056 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@43 -- # for dev in /sys/devices/virtual/nvme-fabrics/ctl/nvme* 00:17:51.056 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nqn.2024-03.io.spdk:cnode0 == 
\n\q\n\.\2\0\2\4\-\0\3\.\i\o\.\s\p\d\k\:\c\n\o\d\e\0 ]] 00:17:51.056 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # echo nvme0 00:17:51.057 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # break 00:17:51.057 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@226 -- # nctrlr=nvme0 00:17:51.057 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@227 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.057 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@228 -- # NOT bdev_connect -b nvme0 --dhchap-key key1 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg bdev_connect -b nvme0 --dhchap-key key1 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=bdev_connect 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t bdev_connect 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # bdev_connect -b nvme0 --dhchap-key key1 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:51.315 10:35:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key1 00:17:51.574 request: 00:17:51.574 { 00:17:51.574 "name": "nvme0", 00:17:51.574 "trtype": "tcp", 00:17:51.574 "traddr": "10.0.0.2", 00:17:51.574 "adrfam": "ipv4", 00:17:51.574 "trsvcid": "4420", 00:17:51.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:17:51.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562", 00:17:51.574 "prchk_reftag": false, 00:17:51.574 "prchk_guard": false, 00:17:51.574 "hdgst": false, 00:17:51.574 "ddgst": false, 00:17:51.574 "dhchap_key": "key1", 00:17:51.574 "allow_unrecognized_csi": false, 00:17:51.574 "method": "bdev_nvme_attach_controller", 00:17:51.574 "req_id": 1 00:17:51.574 } 00:17:51.574 Got JSON-RPC error response 00:17:51.574 response: 00:17:51.574 { 00:17:51.574 "code": -5, 00:17:51.574 "message": "Input/output error" 00:17:51.574 } 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@229 -- # bdev_connect -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:51.574 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:52.509 nvme0n1 00:17:52.509 10:35:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # hostrpc bdev_nvme_get_controllers 00:17:52.509 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # jq -r '.[].name' 00:17:52.509 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.509 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@230 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.509 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@231 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.509 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@233 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@234 -- # bdev_connect -b nvme0 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:52.768 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 00:17:53.026 nvme0n1 00:17:53.026 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # hostrpc bdev_nvme_get_controllers 00:17:53.026 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # jq -r '.[].name' 00:17:53.026 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:53.284 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@235 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:53.284 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@236 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:53.284 10:35:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@239 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@240 -- # nvme_set_keys nvme0 DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: '' 2s 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key=DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey= 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: ]] 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # echo DHHC-1:01:NGUzODIwZjJiMzlmOWYwNTYyMjAwN2E5OGQxMDEyYWZ/Ynpv: 00:17:53.542 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z '' ]] 00:17:53.543 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:53.543 10:35:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:55.442 
10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@241 -- # waitforblk nvme0n1 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@243 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key key2 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:55.442 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@244 -- # nvme_set_keys nvme0 '' DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: 2s 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # local ctl key ckey dev timeout 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ctl=nvme0 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # key= 00:17:55.443 10:35:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # ckey=DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@51 -- # timeout=2s 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # dev=/sys/devices/virtual/nvme-fabrics/ctl/nvme0 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@54 -- # [[ -z '' ]] 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # [[ -z DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: ]] 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # echo DHHC-1:02:MzY0Yzg0ZmE5NGFhMTMyOGEzM2JkYjQ5OTJhZGRmMDY0NWY4MGM5YWJiN2RhNTc2fy2xVQ==: 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # [[ -z 2s ]] 00:17:55.443 10:35:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # sleep 2s 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@245 -- # waitforblk nvme0n1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1239 -- # local i=0 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1250 -- # return 0 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@246 
-- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:57.974 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@249 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@250 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:57.974 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:17:58.291 nvme0n1 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@252 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
--dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@253 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.291 10:35:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:58.858 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # jq -r '.[].name' 00:17:58.858 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # hostrpc bdev_nvme_get_controllers 00:17:58.858 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@254 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@256 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@257 -- # hostrpc bdev_nvme_set_keys nvme0 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 00:17:59.117 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # hostrpc bdev_nvme_get_controllers 00:17:59.118 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:59.118 10:35:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # jq -r '.[].name' 00:17:59.375 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@258 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:59.375 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@260 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@261 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@640 -- # local arg=hostrpc 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:59.376 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key key3 00:17:59.942 request: 00:17:59.942 { 00:17:59.942 "name": "nvme0", 00:17:59.942 "dhchap_key": "key1", 00:17:59.942 "dhchap_ctrlr_key": "key3", 00:17:59.942 "method": "bdev_nvme_set_keys", 00:17:59.942 "req_id": 1 00:17:59.942 } 00:17:59.942 Got JSON-RPC error response 00:17:59.942 response: 00:17:59.942 { 00:17:59.942 "code": -13, 00:17:59.942 "message": "Permission denied" 00:17:59.942 } 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:17:59.942 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:17:59.942 10:35:40 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.200 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 1 != 0 )) 00:18:00.200 10:35:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@263 -- # sleep 1s 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # hostrpc bdev_nvme_get_controllers 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # jq length 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@262 -- # (( 0 != 0 )) 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@267 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@268 -- # bdev_connect -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # hostrpc bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:01.226 10:35:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 -b nvme0 --dhchap-key key0 --dhchap-ctrlr-key key1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:18:02.168 nvme0n1 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@270 -- # rpc_cmd nvmf_subsystem_set_keys nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key key3 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@271 -- # NOT hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@652 -- # local es=0 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@654 -- # valid_exec_arg hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # local arg=hostrpc 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.168 10:35:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # type -t hostrpc 00:18:02.168 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:02.169 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # hostrpc bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:02.169 10:35:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key key0 00:18:02.427 request: 00:18:02.427 { 00:18:02.427 "name": "nvme0", 00:18:02.427 "dhchap_key": "key2", 00:18:02.427 "dhchap_ctrlr_key": "key0", 00:18:02.427 "method": "bdev_nvme_set_keys", 00:18:02.427 "req_id": 1 00:18:02.427 } 00:18:02.427 Got JSON-RPC error response 00:18:02.427 response: 00:18:02.427 { 00:18:02.427 "code": -13, 00:18:02.427 "message": "Permission denied" 00:18:02.427 } 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@655 -- # es=1 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:02.685 10:35:43 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 1 != 0 )) 00:18:02.685 10:35:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@273 -- # sleep 1s 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # hostrpc bdev_nvme_get_controllers 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # jq length 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@272 -- # (( 0 != 0 )) 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@276 -- # trap - SIGINT SIGTERM EXIT 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@277 -- # cleanup 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 3206189 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3206189 ']' 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3206189 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3206189 00:18:04.061 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:18:04.062 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:04.062 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- common/autotest_common.sh@972 -- # echo 'killing process with pid 3206189' 00:18:04.062 killing process with pid 3206189 00:18:04.062 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3206189 00:18:04.062 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3206189 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@99 -- # sync 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@102 -- # set +e 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:04.320 rmmod nvme_tcp 00:18:04.320 rmmod nvme_fabrics 00:18:04.320 rmmod nvme_keyring 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@106 -- # set -e 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@107 -- # return 0 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@336 -- # '[' -n 3227722 ']' 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@337 -- # killprocess 3227722 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # '[' -z 3227722 ']' 00:18:04.320 10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # kill -0 3227722 00:18:04.320 
10:35:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # uname 00:18:04.320 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:04.320 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3227722 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3227722' 00:18:04.579 killing process with pid 3227722 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@973 -- # kill 3227722 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@978 -- # wait 3227722 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # nvmf_fini 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@264 -- # local dev 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:04.579 10:35:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:07.115 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@130 -- # return 0 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:18:07.115 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # _dev=0 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@41 -- # dev_map=() 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/setup.sh@284 -- # iptr 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-save 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@542 -- # iptables-restore 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.554 /tmp/spdk.key-sha256.blh /tmp/spdk.key-sha384.CZb /tmp/spdk.key-sha512.gQ9 /tmp/spdk.key-sha512.Qot /tmp/spdk.key-sha384.axg /tmp/spdk.key-sha256.RKP '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:07.115 00:18:07.115 real 2m31.527s 00:18:07.115 user 5m49.129s 00:18:07.115 sys 0m24.159s 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.115 ************************************ 00:18:07.115 END TEST nvmf_auth_target 00:18:07.115 ************************************ 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:07.115 ************************************ 00:18:07.115 START TEST nvmf_bdevio_no_huge 00:18:07.115 ************************************ 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:07.115 * Looking for test storage... 00:18:07.115 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lcov --version 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # IFS=.-: 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@336 -- # read -ra ver1 00:18:07.115 
10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # IFS=.-: 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@337 -- # read -ra ver2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@338 -- # local 'op=<' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@340 -- # ver1_l=2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@341 -- # ver2_l=1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@344 -- # case "$op" in 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@345 -- # : 1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # decimal 1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@365 -- # ver1[v]=1 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # decimal 2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@353 -- # local d=2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@355 -- # echo 2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@366 -- # ver2[v]=2 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@368 -- # return 0 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:07.115 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:07.115 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.115 --rc genhtml_branch_coverage=1 00:18:07.115 --rc genhtml_function_coverage=1 00:18:07.116 --rc genhtml_legend=1 00:18:07.116 --rc 
geninfo_all_blocks=1 00:18:07.116 --rc geninfo_unexecuted_blocks=1 00:18:07.116 00:18:07.116 ' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:07.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.116 --rc genhtml_branch_coverage=1 00:18:07.116 --rc genhtml_function_coverage=1 00:18:07.116 --rc genhtml_legend=1 00:18:07.116 --rc geninfo_all_blocks=1 00:18:07.116 --rc geninfo_unexecuted_blocks=1 00:18:07.116 00:18:07.116 ' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:07.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.116 --rc genhtml_branch_coverage=1 00:18:07.116 --rc genhtml_function_coverage=1 00:18:07.116 --rc genhtml_legend=1 00:18:07.116 --rc geninfo_all_blocks=1 00:18:07.116 --rc geninfo_unexecuted_blocks=1 00:18:07.116 00:18:07.116 ' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:07.116 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:07.116 --rc genhtml_branch_coverage=1 00:18:07.116 --rc genhtml_function_coverage=1 00:18:07.116 --rc genhtml_legend=1 00:18:07.116 --rc geninfo_all_blocks=1 00:18:07.116 --rc geninfo_unexecuted_blocks=1 00:18:07.116 00:18:07.116 ' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # 
NVMF_SECOND_PORT=4421 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@15 -- # shopt -s extglob 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:07.116 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@50 -- # : 0 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:07.116 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:07.116 10:35:47 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # remove_target_ns 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # xtrace_disable 00:18:07.116 10:35:47 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # pci_devs=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # 
net_devs=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # e810=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@136 -- # local -ga e810 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # x722=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@137 -- # local -ga x722 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # mlx=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@138 -- # local -ga mlx 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:13.723 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.723 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:13.723 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:13.723 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:13.723 Found net devices under 0000:86:00.0: cvl_0_0 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:13.723 Found net devices under 0000:86:00.1: cvl_0_1 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@262 -- # is_hw=yes 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@257 -- # create_target_ns 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo 
up 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@28 -- # local -g _dev 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # ips=() 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:13.723 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 
00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772161 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev 
cvl_0_0' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:13.724 10.0.0.1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@11 -- # local val=167772162 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip 
netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:13.724 10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:13.724 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- 
# [[ -n '' ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:13.724 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:13.724 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:13.724 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:18:13.724 00:18:13.724 --- 10.0.0.1 ping statistics --- 00:18:13.724 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.724 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:13.724 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:13.725 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:18:13.725 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.081 ms 00:18:13.725 00:18:13.725 --- 10.0.0.2 ping statistics --- 00:18:13.725 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:13.725 rtt min/avg/max/mdev = 0.081/0.081/0.081/0.000 ms 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # return 0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:13.725 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@107 -- # local dev=target1 00:18:13.725 
10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@109 -- # return 1 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@168 -- # dev= 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@169 -- # return 0 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@328 -- # nvmfpid=3234696 00:18:13.725 10:35:53 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # waitforlisten 3234696 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@835 -- # '[' -z 3234696 ']' 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:13.725 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:13.725 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.725 [2024-11-20 10:35:53.689193] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:13.725 [2024-11-20 10:35:53.689240] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:13.725 [2024-11-20 10:35:53.772658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:13.725 [2024-11-20 10:35:53.818608] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:18:13.725 [2024-11-20 10:35:53.818643] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:13.725 [2024-11-20 10:35:53.818650] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:13.725 [2024-11-20 10:35:53.818655] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:13.725 [2024-11-20 10:35:53.818660] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:13.725 [2024-11-20 10:35:53.822218] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:18:13.725 [2024-11-20 10:35:53.822307] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:18:13.725 [2024-11-20 10:35:53.822417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:18:13.726 [2024-11-20 10:35:53.822417] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@868 -- # return 0 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- 
# xtrace_disable 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 [2024-11-20 10:35:53.961591] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 Malloc0 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.726 10:35:53 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:13.726 [2024-11-20 10:35:54.005886] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # config=() 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # local subsystem config 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:18:13.726 { 00:18:13.726 "params": { 00:18:13.726 "name": "Nvme$subsystem", 00:18:13.726 "trtype": "$TEST_TRANSPORT", 00:18:13.726 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:13.726 "adrfam": "ipv4", 00:18:13.726 "trsvcid": "$NVMF_PORT", 00:18:13.726 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:13.726 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:13.726 "hdgst": ${hdgst:-false}, 00:18:13.726 "ddgst": ${ddgst:-false} 00:18:13.726 }, 00:18:13.726 "method": "bdev_nvme_attach_controller" 00:18:13.726 } 00:18:13.726 EOF 00:18:13.726 )") 00:18:13.726 10:35:54 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # cat 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@396 -- # jq . 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@397 -- # IFS=, 00:18:13.726 10:35:54 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:18:13.726 "params": { 00:18:13.726 "name": "Nvme1", 00:18:13.726 "trtype": "tcp", 00:18:13.726 "traddr": "10.0.0.2", 00:18:13.726 "adrfam": "ipv4", 00:18:13.726 "trsvcid": "4420", 00:18:13.726 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:13.726 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:13.726 "hdgst": false, 00:18:13.726 "ddgst": false 00:18:13.726 }, 00:18:13.726 "method": "bdev_nvme_attach_controller" 00:18:13.726 }' 00:18:13.726 [2024-11-20 10:35:54.057885] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:13.726 [2024-11-20 10:35:54.057930] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid3234727 ] 00:18:13.726 [2024-11-20 10:35:54.136979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:13.726 [2024-11-20 10:35:54.185193] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:13.726 [2024-11-20 10:35:54.185302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:13.726 [2024-11-20 10:35:54.185302] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:18:13.984 I/O targets: 00:18:13.984 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:13.984 00:18:13.984 00:18:13.984 CUnit - A unit testing framework for C - Version 2.1-3 00:18:13.984 http://cunit.sourceforge.net/ 00:18:13.984 00:18:13.984 00:18:13.984 Suite: bdevio tests on: Nvme1n1 00:18:13.984 Test: blockdev write read block 
...passed 00:18:13.984 Test: blockdev write zeroes read block ...passed 00:18:13.984 Test: blockdev write zeroes read no split ...passed 00:18:13.984 Test: blockdev write zeroes read split ...passed 00:18:13.984 Test: blockdev write zeroes read split partial ...passed 00:18:13.984 Test: blockdev reset ...[2024-11-20 10:35:54.639835] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:18:13.984 [2024-11-20 10:35:54.639899] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x164c920 (9): Bad file descriptor 00:18:13.984 [2024-11-20 10:35:54.694237] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:18:13.984 passed 00:18:14.241 Test: blockdev write read 8 blocks ...passed 00:18:14.241 Test: blockdev write read size > 128k ...passed 00:18:14.241 Test: blockdev write read invalid size ...passed 00:18:14.241 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:14.241 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:14.241 Test: blockdev write read max offset ...passed 00:18:14.241 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:14.241 Test: blockdev writev readv 8 blocks ...passed 00:18:14.241 Test: blockdev writev readv 30 x 1block ...passed 00:18:14.500 Test: blockdev writev readv block ...passed 00:18:14.500 Test: blockdev writev readv size > 128k ...passed 00:18:14.500 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:14.500 Test: blockdev comparev and writev ...[2024-11-20 10:35:55.028065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:14.500 
[2024-11-20 10:35:55.028109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) 
qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.028908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:14.500 [2024-11-20 10:35:55.028917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:14.500 passed 00:18:14.500 Test: blockdev nvme passthru rw ...passed 00:18:14.500 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:35:55.110573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.500 [2024-11-20 10:35:55.110590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.110698] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.500 [2024-11-20 10:35:55.110714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.110814] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.500 [2024-11-20 10:35:55.110823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:14.500 [2024-11-20 10:35:55.110917] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:14.500 [2024-11-20 10:35:55.110927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:14.500 passed 00:18:14.500 Test: blockdev nvme admin passthru ...passed 00:18:14.500 Test: blockdev copy ...passed 00:18:14.500 00:18:14.500 Run Summary: Type Total Ran 
Passed Failed Inactive 00:18:14.500 suites 1 1 n/a 0 0 00:18:14.500 tests 23 23 23 0 0 00:18:14.500 asserts 152 152 152 0 n/a 00:18:14.500 00:18:14.500 Elapsed time = 1.385 seconds 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@563 -- # xtrace_disable 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # nvmfcleanup 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@99 -- # sync 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@102 -- # set +e 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@103 -- # for i in {1..20} 00:18:14.759 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:18:14.759 rmmod nvme_tcp 00:18:14.759 rmmod nvme_fabrics 00:18:14.759 rmmod nvme_keyring 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@106 -- # set -e 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@107 -- # 
return 0 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@336 -- # '[' -n 3234696 ']' 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@337 -- # killprocess 3234696 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # '[' -z 3234696 ']' 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # kill -0 3234696 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # uname 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3234696 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3234696' 00:18:15.017 killing process with pid 3234696 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@973 -- # kill 3234696 00:18:15.017 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@978 -- # wait 3234696 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # nvmf_fini 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@264 -- # local dev 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:15.275 10:35:55 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@268 -- # delete_main_bridge 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@130 -- # return 0 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:18:17.812 
10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # _dev=0 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@41 -- # dev_map=() 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/setup.sh@284 -- # iptr 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-save 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@542 -- # iptables-restore 00:18:17.812 00:18:17.812 real 0m10.567s 00:18:17.812 user 0m12.288s 00:18:17.812 sys 0m5.404s 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1130 -- # xtrace_disable 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:17.812 ************************************ 00:18:17.812 END TEST nvmf_bdevio_no_huge 00:18:17.812 ************************************ 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 
-- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:18:17.812 10:35:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:17.812 ************************************ 00:18:17.812 START TEST nvmf_tls 00:18:17.812 ************************************ 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:17.812 * Looking for test storage... 00:18:17.812 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lcov --version 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@333 -- # local ver1 ver1_l 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@334 -- # local ver2 ver2_l 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # IFS=.-: 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@336 -- # read -ra ver1 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@337 -- # IFS=.-: 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- scripts/common.sh@337 -- # read -ra ver2 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@338 -- # local 'op=<' 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@340 -- # ver1_l=2 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@341 -- # ver2_l=1 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@344 -- # case "$op" in 00:18:17.812 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@345 -- # : 1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v = 0 )) 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # decimal 1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@365 -- # ver1[v]=1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # decimal 2 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@353 -- # local d=2 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@355 -- # echo 2 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@366 -- # ver2[v]=2 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:18:17.813 
10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@368 -- # return 0 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:18:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.813 --rc genhtml_branch_coverage=1 00:18:17.813 --rc genhtml_function_coverage=1 00:18:17.813 --rc genhtml_legend=1 00:18:17.813 --rc geninfo_all_blocks=1 00:18:17.813 --rc geninfo_unexecuted_blocks=1 00:18:17.813 00:18:17.813 ' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:18:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.813 --rc genhtml_branch_coverage=1 00:18:17.813 --rc genhtml_function_coverage=1 00:18:17.813 --rc genhtml_legend=1 00:18:17.813 --rc geninfo_all_blocks=1 00:18:17.813 --rc geninfo_unexecuted_blocks=1 00:18:17.813 00:18:17.813 ' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:18:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.813 --rc genhtml_branch_coverage=1 00:18:17.813 --rc genhtml_function_coverage=1 00:18:17.813 --rc genhtml_legend=1 00:18:17.813 --rc geninfo_all_blocks=1 00:18:17.813 --rc geninfo_unexecuted_blocks=1 00:18:17.813 00:18:17.813 ' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:18:17.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:18:17.813 --rc genhtml_branch_coverage=1 00:18:17.813 --rc genhtml_function_coverage=1 00:18:17.813 --rc genhtml_legend=1 00:18:17.813 --rc geninfo_all_blocks=1 00:18:17.813 --rc 
geninfo_unexecuted_blocks=1 00:18:17.813 00:18:17.813 ' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@15 -- # shopt -s extglob 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@50 -- # : 0 00:18:17.813 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:18:17.813 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@54 -- # have_pci_nics=0 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmftestinit 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # prepare_net_devs 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # local -g is_hw=no 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # remove_target_ns 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:18:17.813 10:35:58 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:18:17.813 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:18:17.814 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:18:17.814 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # xtrace_disable 00:18:17.814 10:35:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # pci_devs=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@131 -- # local -a pci_devs 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # pci_net_devs=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # pci_drivers=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@133 -- # local -A pci_drivers 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # net_devs=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@135 -- # local -ga net_devs 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # e810=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@136 -- # local -ga e810 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # x722=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@137 -- # local -ga x722 00:18:24.381 
10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # mlx=() 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@138 -- # local -ga mlx 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:24.381 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:24.381 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.381 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # [[ up == up ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:24.382 Found net devices under 0000:86:00.0: cvl_0_0 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:24.382 Found net devices under 0000:86:00.1: cvl_0_1 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@262 -- # is_hw=yes 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@257 -- # create_target_ns 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 
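`create_target_ns` above creates the `nvmf_ns_spdk` namespace and records an `ip netns exec` prefix array (`NVMF_TARGET_NS_CMD`) that later commands are run through. A dry-run sketch of that lifecycle — the `run` echo wrapper is illustrative so the sketch is safe without root, and the final delete is the matching cleanup, not shown in this portion of the trace:

```shell
#!/usr/bin/env bash
# Dry-run sketch of the namespace lifecycle in the trace: create the
# namespace, keep an exec-prefix array, run a command through it, and
# tear down. "run" echoes instead of executing (illustrative wrapper).
run() { echo "$@"; }

ns=nvmf_ns_spdk
run ip netns add "$ns"
NVMF_TARGET_NS_CMD=(ip netns exec "$ns")           # prefix array, as in setup.sh
run "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up   # the set_up lo step above
run ip netns del "$ns"                             # cleanup (assumed, not in this excerpt)
```

Keeping the prefix in an array means any later command can be targeted at the namespace just by splicing `"${NVMF_TARGET_NS_CMD[@]}"` in front of it.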
00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@27 -- # local -gA dev_map 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@28 -- # local -g _dev 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # ips=() 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@46 -- # local 
key_initiator=initiator0 key_target=target0 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:18:24.382 10:36:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772161 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:18:24.382 10.0.0.1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@11 -- # local val=167772162 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 
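The `val_to_ip` steps above map the address-pool integers 167772161 and 167772162 to 10.0.0.1 and 10.0.0.2. A sketch of that conversion via byte shifts; the function name mirrors the helper seen in the trace, but the body here is a reconstruction:

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer to dotted-quad notation, as val_to_ip does:
# 167772161 == 0x0A000001 == 10.0.0.1.
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(( (val >> 24) & 255 )) \
        $(( (val >> 16) & 255 )) \
        $(( (val >> 8)  & 255 )) \
        $((  val        & 255 ))
}

val_to_ip 167772161   # → 10.0.0.1
val_to_ip 167772162   # → 10.0.0.2
```

Allocating from an integer pool and formatting on demand is what lets the setup loop hand out consecutive initiator/target addresses with plain arithmetic (`ip_pool += 2`).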
00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:18:24.382 10.0.0.2 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@81 -- # [[ tcp 
== tcp ]] 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@38 -- # ping_ips 1 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:24.382 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:24.383 
10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:18:24.383 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
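The `set_up`, `set_ip`, and `ping_ip` helpers above all take an optional variable *name* (`NVMF_TARGET_NS_CMD`) and resolve it with a bash nameref (`local -n ns=...`), so one helper works both on the host and inside the namespace. A minimal illustration of that pattern — `run_in` and `NS_CMD` are illustrative names, and it echoes rather than executes:

```shell
#!/usr/bin/env bash
# Resolve an optional command-prefix array by variable name, as the
# helpers in the trace do with "local -n ns=$in_ns" (bash 4.3+ nameref).
run_in() {
    local dev=$1 in_ns=$2
    local -a cmd=()
    if [[ -n $in_ns ]]; then
        local -n ref=$in_ns    # nameref to the caller's prefix array
        cmd=("${ref[@]}")
    fi
    cmd+=(ip link set "$dev" up)
    echo "${cmd[*]}"           # echo instead of executing: dry run
}

NS_CMD=(ip netns exec nvmf_ns_spdk)
run_in cvl_0_1 NS_CMD   # → ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
run_in cvl_0_0 ""       # → ip link set cvl_0_0 up
```

Passing the variable name instead of its value is what allows the trace's `eval 'ip netns exec nvmf_ns_spdk ...'` lines for target-side commands while the initiator-side lines run bare.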
00:18:24.383 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.368 ms 00:18:24.383 00:18:24.383 --- 10.0.0.1 ping statistics --- 00:18:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.383 rtt min/avg/max/mdev = 0.368/0.368/0.368/0.000 ms 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # 
ip=10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:18:24.383 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:24.383 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:18:24.383 00:18:24.383 --- 10.0.0.2 ping statistics --- 00:18:24.383 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:24.383 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair++ )) 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # return 0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- 
# get_tcp_initiator_ip_address 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:18:24.383 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=initiator1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 
-- # local dev=target0 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:18:24.383 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # get_net_dev target1 00:18:24.384 10:36:04 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@107 -- # local dev=target1 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@109 -- # return 1 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@168 -- # dev= 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@169 -- # return 0 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@64 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3238638 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3238638 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3238638 ']' 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:24.384 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.384 [2024-11-20 10:36:04.388831] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:24.384 [2024-11-20 10:36:04.388873] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:24.384 [2024-11-20 10:36:04.469651] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:24.384 [2024-11-20 10:36:04.510324] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:24.384 [2024-11-20 10:36:04.510357] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:18:24.384 [2024-11-20 10:36:04.510365] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:24.384 [2024-11-20 10:36:04.510371] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:24.384 [2024-11-20 10:36:04.510376] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:24.384 [2024-11-20 10:36:04.510916] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:24.384 true 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # jq -r .tls_version 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@69 -- # version=0 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # [[ 0 != \0 ]] 00:18:24.384 10:36:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@76 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:24.643 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:24.643 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # jq -r .tls_version 00:18:24.643 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@77 -- # version=13 00:18:24.643 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@78 -- # [[ 13 != \1\3 ]] 00:18:24.643 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 7 00:18:24.901 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:24.901 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # jq -r .tls_version 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@85 -- # version=7 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@86 -- # [[ 7 != \7 ]] 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # jq -r .enable_ktls 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@92 -- # ktls=false 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@93 -- # [[ false != \f\a\l\s\e ]] 00:18:25.160 10:36:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:25.418 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:25.418 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # jq -r .enable_ktls 00:18:25.676 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@100 -- # ktls=true 00:18:25.676 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@101 -- # [[ true != \t\r\u\e ]] 00:18:25.676 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@107 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # jq -r .enable_ktls 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@108 -- # ktls=false 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@109 -- # [[ false != \f\a\l\s\e ]] 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:18:25.935 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@114 -- # 
key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 1 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=ffeeddccbbaa99887766554433221100 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=1 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@115 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # mktemp 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@117 -- # key_path=/tmp/tmp.h0hCpaiwiV 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # mktemp 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key_2_path=/tmp/tmp.4RZQ7RZ1vO 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@120 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@123 -- # chmod 0600 /tmp/tmp.h0hCpaiwiV 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # chmod 0600 /tmp/tmp.4RZQ7RZ1vO 
00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:26.194 10:36:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:26.453 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@129 -- # setup_nvmf_tgt /tmp/tmp.h0hCpaiwiV 00:18:26.453 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.h0hCpaiwiV 00:18:26.453 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:26.712 [2024-11-20 10:36:07.327314] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:26.712 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:26.971 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:26.971 [2024-11-20 10:36:07.688251] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:26.971 [2024-11-20 10:36:07.688442] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:27.229 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:27.229 malloc0 00:18:27.229 10:36:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:27.487 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.h0hCpaiwiV 00:18:27.752 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:27.752 10:36:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.h0hCpaiwiV 00:18:39.957 Initializing NVMe Controllers 00:18:39.957 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:39.957 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:39.957 Initialization complete. Launching workers. 
00:18:39.957 ======================================================== 00:18:39.957 Latency(us) 00:18:39.957 Device Information : IOPS MiB/s Average min max 00:18:39.957 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16874.74 65.92 3792.71 850.27 4774.62 00:18:39.957 ======================================================== 00:18:39.957 Total : 16874.74 65.92 3792.71 850.27 4774.62 00:18:39.957 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@139 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.h0hCpaiwiV 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h0hCpaiwiV 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3241373 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3241373 /var/tmp/bdevperf.sock 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3241373 ']' 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:39.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:39.957 [2024-11-20 10:36:18.610638] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:18:39.957 [2024-11-20 10:36:18.610690] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3241373 ] 00:18:39.957 [2024-11-20 10:36:18.687056] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:39.957 [2024-11-20 10:36:18.728775] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:39.957 10:36:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h0hCpaiwiV 00:18:39.957 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 
--psk key0 00:18:39.957 [2024-11-20 10:36:19.167783] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:39.957 TLSTESTn1 00:18:39.957 10:36:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:39.957 Running I/O for 10 seconds... 00:18:40.895 5090.00 IOPS, 19.88 MiB/s [2024-11-20T09:36:22.563Z] 5303.00 IOPS, 20.71 MiB/s [2024-11-20T09:36:23.500Z] 5214.00 IOPS, 20.37 MiB/s [2024-11-20T09:36:24.437Z] 5108.00 IOPS, 19.95 MiB/s [2024-11-20T09:36:25.376Z] 5108.80 IOPS, 19.96 MiB/s [2024-11-20T09:36:26.754Z] 5094.33 IOPS, 19.90 MiB/s [2024-11-20T09:36:27.691Z] 5075.86 IOPS, 19.83 MiB/s [2024-11-20T09:36:28.627Z] 5003.25 IOPS, 19.54 MiB/s [2024-11-20T09:36:29.563Z] 5015.11 IOPS, 19.59 MiB/s [2024-11-20T09:36:29.563Z] 5012.00 IOPS, 19.58 MiB/s 00:18:48.832 Latency(us) 00:18:48.832 [2024-11-20T09:36:29.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.832 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:18:48.832 Verification LBA range: start 0x0 length 0x2000 00:18:48.832 TLSTESTn1 : 10.02 5016.18 19.59 0.00 0.00 25482.17 5898.24 33204.91 00:18:48.832 [2024-11-20T09:36:29.563Z] =================================================================================================================== 00:18:48.832 [2024-11-20T09:36:29.563Z] Total : 5016.18 19.59 0.00 0.00 25482.17 5898.24 33204.91 00:18:48.832 { 00:18:48.832 "results": [ 00:18:48.832 { 00:18:48.832 "job": "TLSTESTn1", 00:18:48.832 "core_mask": "0x4", 00:18:48.832 "workload": "verify", 00:18:48.832 "status": "finished", 00:18:48.832 "verify_range": { 00:18:48.832 "start": 0, 00:18:48.832 "length": 8192 00:18:48.832 }, 00:18:48.832 "queue_depth": 128, 00:18:48.832 "io_size": 4096, 00:18:48.832 "runtime": 10.016978, 00:18:48.832 "iops": 
5016.1835236136085, 00:18:48.832 "mibps": 19.594466889115658, 00:18:48.832 "io_failed": 0, 00:18:48.832 "io_timeout": 0, 00:18:48.832 "avg_latency_us": 25482.169958196984, 00:18:48.832 "min_latency_us": 5898.24, 00:18:48.832 "max_latency_us": 33204.90666666667 00:18:48.832 } 00:18:48.832 ], 00:18:48.832 "core_count": 1 00:18:48.832 } 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3241373 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3241373 ']' 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3241373 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3241373 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3241373' 00:18:48.832 killing process with pid 3241373 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3241373 00:18:48.832 Received shutdown signal, test time was about 10.000000 seconds 00:18:48.832 00:18:48.832 Latency(us) 00:18:48.832 [2024-11-20T09:36:29.563Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:48.832 [2024-11-20T09:36:29.563Z] 
=================================================================================================================== 00:18:48.832 [2024-11-20T09:36:29.563Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:18:48.832 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3241373 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@142 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4RZQ7RZ1vO 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4RZQ7RZ1vO 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.4RZQ7RZ1vO 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.4RZQ7RZ1vO 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # 
bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3243207 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3243207 /var/tmp/bdevperf.sock 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3243207 ']' 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.092 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.092 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.092 [2024-11-20 10:36:29.673796] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:49.092 [2024-11-20 10:36:29.673843] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243207 ] 00:18:49.092 [2024-11-20 10:36:29.747796] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:49.092 [2024-11-20 10:36:29.787039] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:49.351 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:49.351 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:49.351 10:36:29 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.4RZQ7RZ1vO 00:18:49.351 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:49.610 [2024-11-20 10:36:30.257056] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:49.610 [2024-11-20 10:36:30.263391] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:18:49.610 [2024-11-20 10:36:30.264384] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fec170 (107): Transport endpoint is not connected 00:18:49.610 [2024-11-20 10:36:30.265377] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1fec170 (9): Bad file descriptor 00:18:49.610 
[2024-11-20 10:36:30.266379] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:49.610 [2024-11-20 10:36:30.266392] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:49.610 [2024-11-20 10:36:30.266399] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:49.610 [2024-11-20 10:36:30.266410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:49.610 request: 00:18:49.610 { 00:18:49.610 "name": "TLSTEST", 00:18:49.610 "trtype": "tcp", 00:18:49.610 "traddr": "10.0.0.2", 00:18:49.610 "adrfam": "ipv4", 00:18:49.610 "trsvcid": "4420", 00:18:49.610 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:49.610 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:49.610 "prchk_reftag": false, 00:18:49.610 "prchk_guard": false, 00:18:49.610 "hdgst": false, 00:18:49.610 "ddgst": false, 00:18:49.610 "psk": "key0", 00:18:49.610 "allow_unrecognized_csi": false, 00:18:49.610 "method": "bdev_nvme_attach_controller", 00:18:49.610 "req_id": 1 00:18:49.610 } 00:18:49.610 Got JSON-RPC error response 00:18:49.610 response: 00:18:49.610 { 00:18:49.610 "code": -5, 00:18:49.610 "message": "Input/output error" 00:18:49.610 } 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3243207 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3243207 ']' 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3243207 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:49.610 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls 
-- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243207 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243207' 00:18:49.869 killing process with pid 3243207 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3243207 00:18:49.869 Received shutdown signal, test time was about 10.000000 seconds 00:18:49.869 00:18:49.869 Latency(us) 00:18:49.869 [2024-11-20T09:36:30.600Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:49.869 [2024-11-20T09:36:30.600Z] =================================================================================================================== 00:18:49.869 [2024-11-20T09:36:30.600Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3243207 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@145 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h0hCpaiwiV 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 
00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h0hCpaiwiV 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.h0hCpaiwiV 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h0hCpaiwiV 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3243438 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3243438 
/var/tmp/bdevperf.sock 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3243438 ']' 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:49.869 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:49.869 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:49.869 [2024-11-20 10:36:30.551004] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:49.869 [2024-11-20 10:36:30.551049] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243438 ] 00:18:50.128 [2024-11-20 10:36:30.621357] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.128 [2024-11-20 10:36:30.662960] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.128 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.128 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.128 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h0hCpaiwiV 00:18:50.386 10:36:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk key0 00:18:50.386 [2024-11-20 10:36:31.112840] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:50.644 [2024-11-20 10:36:31.117501] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:50.644 [2024-11-20 10:36:31.117524] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:18:50.644 [2024-11-20 10:36:31.117548] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:50.644 [2024-11-20 10:36:31.118182] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c5170 (107): Transport endpoint is not connected 00:18:50.644 [2024-11-20 10:36:31.119173] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x24c5170 (9): Bad file descriptor 00:18:50.644 [2024-11-20 10:36:31.120175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] Ctrlr is in error state 00:18:50.644 [2024-11-20 10:36:31.120186] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:50.644 [2024-11-20 10:36:31.120194] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode1, Operation not permitted 00:18:50.644 [2024-11-20 10:36:31.120208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 0] in failed state. 00:18:50.644 request: 00:18:50.644 { 00:18:50.644 "name": "TLSTEST", 00:18:50.644 "trtype": "tcp", 00:18:50.644 "traddr": "10.0.0.2", 00:18:50.644 "adrfam": "ipv4", 00:18:50.644 "trsvcid": "4420", 00:18:50.644 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:50.644 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:18:50.644 "prchk_reftag": false, 00:18:50.644 "prchk_guard": false, 00:18:50.645 "hdgst": false, 00:18:50.645 "ddgst": false, 00:18:50.645 "psk": "key0", 00:18:50.645 "allow_unrecognized_csi": false, 00:18:50.645 "method": "bdev_nvme_attach_controller", 00:18:50.645 "req_id": 1 00:18:50.645 } 00:18:50.645 Got JSON-RPC error response 00:18:50.645 response: 00:18:50.645 { 00:18:50.645 "code": -5, 00:18:50.645 "message": "Input/output error" 00:18:50.645 } 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3243438 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3243438 ']' 00:18:50.645 10:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3243438 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243438 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243438' 00:18:50.645 killing process with pid 3243438 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3243438 00:18:50.645 Received shutdown signal, test time was about 10.000000 seconds 00:18:50.645 00:18:50.645 Latency(us) 00:18:50.645 [2024-11-20T09:36:31.376Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:50.645 [2024-11-20T09:36:31.376Z] =================================================================================================================== 00:18:50.645 [2024-11-20T09:36:31.376Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3243438 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:50.645 10:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@148 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h0hCpaiwiV 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h0hCpaiwiV 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.h0hCpaiwiV 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.h0hCpaiwiV 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3243464 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 
'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3243464 /var/tmp/bdevperf.sock 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3243464 ']' 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:50.645 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:50.645 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:50.904 [2024-11-20 10:36:31.384185] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:50.904 [2024-11-20 10:36:31.384236] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243464 ] 00:18:50.904 [2024-11-20 10:36:31.457934] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:50.904 [2024-11-20 10:36:31.499777] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:50.904 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:50.904 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:50.904 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.h0hCpaiwiV 00:18:51.162 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:51.535 [2024-11-20 10:36:31.938853] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:51.535 [2024-11-20 10:36:31.947542] tcp.c: 969:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:51.535 [2024-11-20 10:36:31.947564] posix.c: 573:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:18:51.535 [2024-11-20 10:36:31.947585] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not 
connected 00:18:51.535 [2024-11-20 10:36:31.948151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2138170 (107): Transport endpoint is not connected 00:18:51.535 [2024-11-20 10:36:31.949143] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2138170 (9): Bad file descriptor 00:18:51.535 [2024-11-20 10:36:31.950145] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] Ctrlr is in error state 00:18:51.535 [2024-11-20 10:36:31.950156] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:18:51.535 [2024-11-20 10:36:31.950163] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode2, Operation not permitted 00:18:51.535 [2024-11-20 10:36:31.950172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 0] in failed state. 00:18:51.535 request: 00:18:51.535 { 00:18:51.535 "name": "TLSTEST", 00:18:51.535 "trtype": "tcp", 00:18:51.535 "traddr": "10.0.0.2", 00:18:51.535 "adrfam": "ipv4", 00:18:51.535 "trsvcid": "4420", 00:18:51.535 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:18:51.535 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:51.535 "prchk_reftag": false, 00:18:51.535 "prchk_guard": false, 00:18:51.535 "hdgst": false, 00:18:51.535 "ddgst": false, 00:18:51.535 "psk": "key0", 00:18:51.535 "allow_unrecognized_csi": false, 00:18:51.535 "method": "bdev_nvme_attach_controller", 00:18:51.535 "req_id": 1 00:18:51.535 } 00:18:51.535 Got JSON-RPC error response 00:18:51.535 response: 00:18:51.535 { 00:18:51.535 "code": -5, 00:18:51.535 "message": "Input/output error" 00:18:51.535 } 00:18:51.535 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3243464 00:18:51.535 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3243464 ']' 00:18:51.535 10:36:31 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3243464 00:18:51.535 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:51.535 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:51.535 10:36:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243464 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243464' 00:18:51.535 killing process with pid 3243464 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3243464 00:18:51.535 Received shutdown signal, test time was about 10.000000 seconds 00:18:51.535 00:18:51.535 Latency(us) 00:18:51.535 [2024-11-20T09:36:32.266Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:51.535 [2024-11-20T09:36:32.266Z] =================================================================================================================== 00:18:51.535 [2024-11-20T09:36:32.266Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3243464 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:51.535 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@151 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3243688 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:51.535 10:36:32 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3243688 /var/tmp/bdevperf.sock 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3243688 ']' 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:51.535 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:51.535 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:51.535 [2024-11-20 10:36:32.231172] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:51.535 [2024-11-20 10:36:32.231230] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3243688 ] 00:18:51.793 [2024-11-20 10:36:32.308151] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:51.793 [2024-11-20 10:36:32.345372] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:51.793 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:51.793 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:51.793 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 '' 00:18:52.052 [2024-11-20 10:36:32.602189] keyring.c: 24:keyring_file_check_path: *ERROR*: Non-absolute paths are not allowed: 00:18:52.052 [2024-11-20 10:36:32.602224] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:18:52.052 request: 00:18:52.052 { 00:18:52.052 "name": "key0", 00:18:52.052 "path": "", 00:18:52.052 "method": "keyring_file_add_key", 00:18:52.052 "req_id": 1 00:18:52.052 } 00:18:52.052 Got JSON-RPC error response 00:18:52.052 response: 00:18:52.052 { 00:18:52.052 "code": -1, 00:18:52.052 "message": "Operation not permitted" 00:18:52.052 } 00:18:52.052 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:52.310 [2024-11-20 10:36:32.794777] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 
00:18:52.310 [2024-11-20 10:36:32.794803] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:18:52.310 request: 00:18:52.310 { 00:18:52.310 "name": "TLSTEST", 00:18:52.310 "trtype": "tcp", 00:18:52.310 "traddr": "10.0.0.2", 00:18:52.310 "adrfam": "ipv4", 00:18:52.310 "trsvcid": "4420", 00:18:52.310 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:52.310 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:52.310 "prchk_reftag": false, 00:18:52.310 "prchk_guard": false, 00:18:52.310 "hdgst": false, 00:18:52.310 "ddgst": false, 00:18:52.310 "psk": "key0", 00:18:52.310 "allow_unrecognized_csi": false, 00:18:52.310 "method": "bdev_nvme_attach_controller", 00:18:52.310 "req_id": 1 00:18:52.310 } 00:18:52.310 Got JSON-RPC error response 00:18:52.310 response: 00:18:52.310 { 00:18:52.310 "code": -126, 00:18:52.310 "message": "Required key not available" 00:18:52.310 } 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3243688 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3243688 ']' 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3243688 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243688 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243688' 00:18:52.310 killing process with pid 3243688 
00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3243688 00:18:52.310 Received shutdown signal, test time was about 10.000000 seconds 00:18:52.310 00:18:52.310 Latency(us) 00:18:52.310 [2024-11-20T09:36:33.041Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:18:52.310 [2024-11-20T09:36:33.041Z] =================================================================================================================== 00:18:52.310 [2024-11-20T09:36:33.041Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:18:52.310 10:36:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3243688 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@154 -- # killprocess 3238638 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3238638 ']' 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3238638 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:18:52.310 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3238638 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- 
# process_name=reactor_1 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3238638' 00:18:52.569 killing process with pid 3238638 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3238638 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3238638 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@504 -- # local prefix key digest 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@506 -- # digest=2 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@507 -- # python - 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # mktemp 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@156 -- # key_long_path=/tmp/tmp.iRqze5yeVD 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@157 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:18:52.569 10:36:33 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # chmod 0600 /tmp/tmp.iRqze5yeVD 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # nvmfappstart -m 0x2 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:52.569 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3243935 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3243935 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3243935 ']' 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:52.826 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:52.826 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:52.826 [2024-11-20 10:36:33.345151] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:52.826 [2024-11-20 10:36:33.345211] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:52.826 [2024-11-20 10:36:33.423365] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:52.826 [2024-11-20 10:36:33.459018] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:52.826 [2024-11-20 10:36:33.459053] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:52.826 [2024-11-20 10:36:33.459059] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:52.826 [2024-11-20 10:36:33.459065] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:52.826 [2024-11-20 10:36:33.459070] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:18:52.826 [2024-11-20 10:36:33.459660] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iRqze5yeVD 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:53.084 [2024-11-20 10:36:33.762035] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:53.084 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:53.342 10:36:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:53.601 [2024-11-20 10:36:34.151047] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:53.601 [2024-11-20 10:36:34.151253] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:18:53.601 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:53.860 malloc0 00:18:53.860 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:54.119 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:18:54.119 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iRqze5yeVD 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iRqze5yeVD 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3244193 00:18:54.379 10:36:34 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3244193 /var/tmp/bdevperf.sock 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3244193 ']' 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:54.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:18:54.379 10:36:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:54.379 [2024-11-20 10:36:35.003929] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:18:54.379 [2024-11-20 10:36:35.003977] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3244193 ] 00:18:54.379 [2024-11-20 10:36:35.070987] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:54.638 [2024-11-20 10:36:35.111459] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:18:54.638 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:18:54.638 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:18:54.638 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:18:54.896 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:18:54.896 [2024-11-20 10:36:35.565569] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:55.156 TLSTESTn1 00:18:55.156 10:36:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:55.156 Running I/O for 10 seconds... 
00:18:57.468 5245.00 IOPS, 20.49 MiB/s [2024-11-20T09:36:38.766Z] 5402.50 IOPS, 21.10 MiB/s [2024-11-20T09:36:40.193Z] 5466.00 IOPS, 21.35 MiB/s [2024-11-20T09:36:40.801Z] 5445.75 IOPS, 21.27 MiB/s [2024-11-20T09:36:42.177Z] 5463.00 IOPS, 21.34 MiB/s [2024-11-20T09:36:43.112Z] 5494.50 IOPS, 21.46 MiB/s [2024-11-20T09:36:44.048Z] 5495.29 IOPS, 21.47 MiB/s [2024-11-20T09:36:44.982Z] 5509.38 IOPS, 21.52 MiB/s [2024-11-20T09:36:45.921Z] 5511.00 IOPS, 21.53 MiB/s [2024-11-20T09:36:45.921Z] 5519.30 IOPS, 21.56 MiB/s 00:19:05.190 Latency(us) 00:19:05.190 [2024-11-20T09:36:45.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.190 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:05.190 Verification LBA range: start 0x0 length 0x2000 00:19:05.190 TLSTESTn1 : 10.01 5525.11 21.58 0.00 0.00 23133.99 5086.84 33204.91 00:19:05.190 [2024-11-20T09:36:45.921Z] =================================================================================================================== 00:19:05.190 [2024-11-20T09:36:45.921Z] Total : 5525.11 21.58 0.00 0.00 23133.99 5086.84 33204.91 00:19:05.190 { 00:19:05.190 "results": [ 00:19:05.190 { 00:19:05.190 "job": "TLSTESTn1", 00:19:05.190 "core_mask": "0x4", 00:19:05.190 "workload": "verify", 00:19:05.190 "status": "finished", 00:19:05.190 "verify_range": { 00:19:05.190 "start": 0, 00:19:05.190 "length": 8192 00:19:05.190 }, 00:19:05.190 "queue_depth": 128, 00:19:05.190 "io_size": 4096, 00:19:05.190 "runtime": 10.012289, 00:19:05.190 "iops": 5525.110192084947, 00:19:05.190 "mibps": 21.582461687831824, 00:19:05.190 "io_failed": 0, 00:19:05.190 "io_timeout": 0, 00:19:05.190 "avg_latency_us": 23133.989847352885, 00:19:05.190 "min_latency_us": 5086.8419047619045, 00:19:05.190 "max_latency_us": 33204.90666666667 00:19:05.190 } 00:19:05.190 ], 00:19:05.190 "core_count": 1 00:19:05.190 } 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # trap 'nvmftestfini; 
exit 1' SIGINT SIGTERM EXIT 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@46 -- # killprocess 3244193 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3244193 ']' 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3244193 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3244193 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3244193' 00:19:05.190 killing process with pid 3244193 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3244193 00:19:05.190 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.190 00:19:05.190 Latency(us) 00:19:05.190 [2024-11-20T09:36:45.921Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.190 [2024-11-20T09:36:45.921Z] =================================================================================================================== 00:19:05.190 [2024-11-20T09:36:45.921Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:05.190 10:36:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3244193 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@166 -- # chmod 0666 /tmp/tmp.iRqze5yeVD 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
target/tls.sh@167 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iRqze5yeVD 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iRqze5yeVD 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=run_bdevperf 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t run_bdevperf 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.iRqze5yeVD 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk=/tmp/tmp.iRqze5yeVD 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=3246036 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 3246036 /var/tmp/bdevperf.sock 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3246036 ']' 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:05.449 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:05.450 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:05.450 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:05.450 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:05.450 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:05.450 [2024-11-20 10:36:46.082078] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:19:05.450 [2024-11-20 10:36:46.082129] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246036 ] 00:19:05.450 [2024-11-20 10:36:46.145917] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:05.709 [2024-11-20 10:36:46.183356] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:05.709 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:05.709 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:05.709 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:05.968 [2024-11-20 10:36:46.445047] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iRqze5yeVD': 0100666 00:19:05.968 [2024-11-20 10:36:46.445079] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:05.968 request: 00:19:05.968 { 00:19:05.968 "name": "key0", 00:19:05.968 "path": "/tmp/tmp.iRqze5yeVD", 00:19:05.968 "method": "keyring_file_add_key", 00:19:05.968 "req_id": 1 00:19:05.968 } 00:19:05.968 Got JSON-RPC error response 00:19:05.968 response: 00:19:05.968 { 00:19:05.968 "code": -1, 00:19:05.968 "message": "Operation not permitted" 00:19:05.968 } 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:05.968 [2024-11-20 10:36:46.625596] bdev_nvme_rpc.c: 
514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:05.968 [2024-11-20 10:36:46.625623] bdev_nvme.c:6716:spdk_bdev_nvme_create: *ERROR*: Could not load PSK: key0 00:19:05.968 request: 00:19:05.968 { 00:19:05.968 "name": "TLSTEST", 00:19:05.968 "trtype": "tcp", 00:19:05.968 "traddr": "10.0.0.2", 00:19:05.968 "adrfam": "ipv4", 00:19:05.968 "trsvcid": "4420", 00:19:05.968 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:05.968 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:05.968 "prchk_reftag": false, 00:19:05.968 "prchk_guard": false, 00:19:05.968 "hdgst": false, 00:19:05.968 "ddgst": false, 00:19:05.968 "psk": "key0", 00:19:05.968 "allow_unrecognized_csi": false, 00:19:05.968 "method": "bdev_nvme_attach_controller", 00:19:05.968 "req_id": 1 00:19:05.968 } 00:19:05.968 Got JSON-RPC error response 00:19:05.968 response: 00:19:05.968 { 00:19:05.968 "code": -126, 00:19:05.968 "message": "Required key not available" 00:19:05.968 } 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # killprocess 3246036 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3246036 ']' 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3246036 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246036 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3246036' 00:19:05.968 killing process with pid 3246036 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3246036 00:19:05.968 Received shutdown signal, test time was about 10.000000 seconds 00:19:05.968 00:19:05.968 Latency(us) 00:19:05.968 [2024-11-20T09:36:46.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:05.968 [2024-11-20T09:36:46.699Z] =================================================================================================================== 00:19:05.968 [2024-11-20T09:36:46.699Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:05.968 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3246036 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@38 -- # return 1 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # killprocess 3243935 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3243935 ']' 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3243935 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3243935 00:19:06.227 
10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3243935' 00:19:06.227 killing process with pid 3243935 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3243935 00:19:06.227 10:36:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3243935 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # nvmfappstart -m 0x2 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3246124 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3246124 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3246124 ']' 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:19:06.485 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:06.485 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.485 [2024-11-20 10:36:47.092371] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:06.485 [2024-11-20 10:36:47.092431] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:06.485 [2024-11-20 10:36:47.173319] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:06.744 [2024-11-20 10:36:47.213705] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:06.744 [2024-11-20 10:36:47.213741] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:06.744 [2024-11-20 10:36:47.213748] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:06.744 [2024-11-20 10:36:47.213754] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:06.744 [2024-11-20 10:36:47.213760] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:06.744 [2024-11-20 10:36:47.214407] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@173 -- # NOT setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@652 -- # local es=0 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@654 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # local arg=setup_nvmf_tgt 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # type -t setup_nvmf_tgt 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iRqze5yeVD 00:19:06.744 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- 
# /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:07.001 [2024-11-20 10:36:47.526023] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:07.001 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:07.259 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:07.259 [2024-11-20 10:36:47.910999] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:07.259 [2024-11-20 10:36:47.911206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:07.259 10:36:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:07.517 malloc0 00:19:07.517 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:07.775 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:07.775 [2024-11-20 10:36:48.500547] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.iRqze5yeVD': 0100666 00:19:07.775 [2024-11-20 10:36:48.500573] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:19:08.033 request: 00:19:08.033 { 00:19:08.033 "name": "key0", 00:19:08.033 "path": "/tmp/tmp.iRqze5yeVD", 00:19:08.033 "method": "keyring_file_add_key", 00:19:08.033 "req_id": 1 
00:19:08.033 } 00:19:08.033 Got JSON-RPC error response 00:19:08.033 response: 00:19:08.033 { 00:19:08.033 "code": -1, 00:19:08.033 "message": "Operation not permitted" 00:19:08.033 } 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:08.033 [2024-11-20 10:36:48.685050] tcp.c:3792:nvmf_tcp_subsystem_add_host: *ERROR*: Key 'key0' does not exist 00:19:08.033 [2024-11-20 10:36:48.685085] subsystem.c:1051:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:08.033 request: 00:19:08.033 { 00:19:08.033 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:08.033 "host": "nqn.2016-06.io.spdk:host1", 00:19:08.033 "psk": "key0", 00:19:08.033 "method": "nvmf_subsystem_add_host", 00:19:08.033 "req_id": 1 00:19:08.033 } 00:19:08.033 Got JSON-RPC error response 00:19:08.033 response: 00:19:08.033 { 00:19:08.033 "code": -32603, 00:19:08.033 "message": "Internal error" 00:19:08.033 } 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@655 -- # es=1 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@176 -- # killprocess 3246124 00:19:08.033 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3246124 ']' 00:19:08.034 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3246124 00:19:08.034 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:08.034 10:36:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:08.034 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246124 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246124' 00:19:08.291 killing process with pid 3246124 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3246124 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3246124 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # chmod 0600 /tmp/tmp.iRqze5yeVD 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # nvmfappstart -m 0x2 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3246539 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3246539 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3246539 ']' 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:08.291 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:08.291 10:36:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.291 [2024-11-20 10:36:48.982769] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:08.291 [2024-11-20 10:36:48.982818] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:08.548 [2024-11-20 10:36:49.061117] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:08.548 [2024-11-20 10:36:49.100980] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:08.548 [2024-11-20 10:36:49.101019] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:08.548 [2024-11-20 10:36:49.101026] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:08.548 [2024-11-20 10:36:49.101032] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:08.548 [2024-11-20 10:36:49.101036] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:08.548 [2024-11-20 10:36:49.101610] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iRqze5yeVD 00:19:08.548 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:08.806 [2024-11-20 10:36:49.407418] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:08.806 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:09.063 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:09.063 [2024-11-20 10:36:49.752295] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:09.063 [2024-11-20 10:36:49.752495] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP 
Target Listening on 10.0.0.2 port 4420 *** 00:19:09.063 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:09.322 malloc0 00:19:09.322 10:36:49 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:09.581 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@183 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # bdevperf_pid=3246800 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@186 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # waitforlisten 3246800 /var/tmp/bdevperf.sock 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3246800 ']' 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX 
domain socket /var/tmp/bdevperf.sock...' 00:19:09.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:09.840 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.840 [2024-11-20 10:36:50.555048] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:09.840 [2024-11-20 10:36:50.555094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3246800 ] 00:19:10.099 [2024-11-20 10:36:50.630532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:10.099 [2024-11-20 10:36:50.672547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.099 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:10.099 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:10.099 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:10.357 10:36:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@189 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:10.615 [2024-11-20 10:36:51.098330] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.615 TLSTESTn1 00:19:10.615 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:10.874 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@193 -- # tgtconf='{ 00:19:10.874 "subsystems": [ 00:19:10.874 { 00:19:10.874 "subsystem": "keyring", 00:19:10.874 "config": [ 00:19:10.874 { 00:19:10.874 "method": "keyring_file_add_key", 00:19:10.874 "params": { 00:19:10.874 "name": "key0", 00:19:10.874 "path": "/tmp/tmp.iRqze5yeVD" 00:19:10.874 } 00:19:10.874 } 00:19:10.874 ] 00:19:10.874 }, 00:19:10.874 { 00:19:10.874 "subsystem": "iobuf", 00:19:10.874 "config": [ 00:19:10.874 { 00:19:10.874 "method": "iobuf_set_options", 00:19:10.874 "params": { 00:19:10.874 "small_pool_count": 8192, 00:19:10.874 "large_pool_count": 1024, 00:19:10.874 "small_bufsize": 8192, 00:19:10.874 "large_bufsize": 135168, 00:19:10.874 "enable_numa": false 00:19:10.874 } 00:19:10.874 } 00:19:10.874 ] 00:19:10.874 }, 00:19:10.874 { 00:19:10.874 "subsystem": "sock", 00:19:10.874 "config": [ 00:19:10.874 { 00:19:10.874 "method": "sock_set_default_impl", 00:19:10.874 "params": { 00:19:10.874 "impl_name": "posix" 00:19:10.874 } 00:19:10.874 }, 00:19:10.874 { 00:19:10.874 "method": "sock_impl_set_options", 00:19:10.874 "params": { 00:19:10.875 "impl_name": "ssl", 00:19:10.875 "recv_buf_size": 4096, 00:19:10.875 "send_buf_size": 4096, 00:19:10.875 "enable_recv_pipe": true, 00:19:10.875 "enable_quickack": false, 00:19:10.875 "enable_placement_id": 0, 00:19:10.875 "enable_zerocopy_send_server": true, 00:19:10.875 "enable_zerocopy_send_client": false, 00:19:10.875 "zerocopy_threshold": 0, 00:19:10.875 "tls_version": 0, 00:19:10.875 "enable_ktls": false 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "sock_impl_set_options", 00:19:10.875 "params": { 00:19:10.875 "impl_name": "posix", 00:19:10.875 "recv_buf_size": 2097152, 00:19:10.875 "send_buf_size": 2097152, 00:19:10.875 "enable_recv_pipe": true, 00:19:10.875 "enable_quickack": false, 00:19:10.875 "enable_placement_id": 0, 
00:19:10.875 "enable_zerocopy_send_server": true, 00:19:10.875 "enable_zerocopy_send_client": false, 00:19:10.875 "zerocopy_threshold": 0, 00:19:10.875 "tls_version": 0, 00:19:10.875 "enable_ktls": false 00:19:10.875 } 00:19:10.875 } 00:19:10.875 ] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "vmd", 00:19:10.875 "config": [] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "accel", 00:19:10.875 "config": [ 00:19:10.875 { 00:19:10.875 "method": "accel_set_options", 00:19:10.875 "params": { 00:19:10.875 "small_cache_size": 128, 00:19:10.875 "large_cache_size": 16, 00:19:10.875 "task_count": 2048, 00:19:10.875 "sequence_count": 2048, 00:19:10.875 "buf_count": 2048 00:19:10.875 } 00:19:10.875 } 00:19:10.875 ] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "bdev", 00:19:10.875 "config": [ 00:19:10.875 { 00:19:10.875 "method": "bdev_set_options", 00:19:10.875 "params": { 00:19:10.875 "bdev_io_pool_size": 65535, 00:19:10.875 "bdev_io_cache_size": 256, 00:19:10.875 "bdev_auto_examine": true, 00:19:10.875 "iobuf_small_cache_size": 128, 00:19:10.875 "iobuf_large_cache_size": 16 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_raid_set_options", 00:19:10.875 "params": { 00:19:10.875 "process_window_size_kb": 1024, 00:19:10.875 "process_max_bandwidth_mb_sec": 0 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_iscsi_set_options", 00:19:10.875 "params": { 00:19:10.875 "timeout_sec": 30 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_nvme_set_options", 00:19:10.875 "params": { 00:19:10.875 "action_on_timeout": "none", 00:19:10.875 "timeout_us": 0, 00:19:10.875 "timeout_admin_us": 0, 00:19:10.875 "keep_alive_timeout_ms": 10000, 00:19:10.875 "arbitration_burst": 0, 00:19:10.875 "low_priority_weight": 0, 00:19:10.875 "medium_priority_weight": 0, 00:19:10.875 "high_priority_weight": 0, 00:19:10.875 "nvme_adminq_poll_period_us": 10000, 00:19:10.875 "nvme_ioq_poll_period_us": 0, 
00:19:10.875 "io_queue_requests": 0, 00:19:10.875 "delay_cmd_submit": true, 00:19:10.875 "transport_retry_count": 4, 00:19:10.875 "bdev_retry_count": 3, 00:19:10.875 "transport_ack_timeout": 0, 00:19:10.875 "ctrlr_loss_timeout_sec": 0, 00:19:10.875 "reconnect_delay_sec": 0, 00:19:10.875 "fast_io_fail_timeout_sec": 0, 00:19:10.875 "disable_auto_failback": false, 00:19:10.875 "generate_uuids": false, 00:19:10.875 "transport_tos": 0, 00:19:10.875 "nvme_error_stat": false, 00:19:10.875 "rdma_srq_size": 0, 00:19:10.875 "io_path_stat": false, 00:19:10.875 "allow_accel_sequence": false, 00:19:10.875 "rdma_max_cq_size": 0, 00:19:10.875 "rdma_cm_event_timeout_ms": 0, 00:19:10.875 "dhchap_digests": [ 00:19:10.875 "sha256", 00:19:10.875 "sha384", 00:19:10.875 "sha512" 00:19:10.875 ], 00:19:10.875 "dhchap_dhgroups": [ 00:19:10.875 "null", 00:19:10.875 "ffdhe2048", 00:19:10.875 "ffdhe3072", 00:19:10.875 "ffdhe4096", 00:19:10.875 "ffdhe6144", 00:19:10.875 "ffdhe8192" 00:19:10.875 ] 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_nvme_set_hotplug", 00:19:10.875 "params": { 00:19:10.875 "period_us": 100000, 00:19:10.875 "enable": false 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_malloc_create", 00:19:10.875 "params": { 00:19:10.875 "name": "malloc0", 00:19:10.875 "num_blocks": 8192, 00:19:10.875 "block_size": 4096, 00:19:10.875 "physical_block_size": 4096, 00:19:10.875 "uuid": "d08fc8ad-e765-43db-9efd-f4bcb187d21f", 00:19:10.875 "optimal_io_boundary": 0, 00:19:10.875 "md_size": 0, 00:19:10.875 "dif_type": 0, 00:19:10.875 "dif_is_head_of_md": false, 00:19:10.875 "dif_pi_format": 0 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "bdev_wait_for_examine" 00:19:10.875 } 00:19:10.875 ] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "nbd", 00:19:10.875 "config": [] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "scheduler", 00:19:10.875 "config": [ 00:19:10.875 { 00:19:10.875 "method": 
"framework_set_scheduler", 00:19:10.875 "params": { 00:19:10.875 "name": "static" 00:19:10.875 } 00:19:10.875 } 00:19:10.875 ] 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "subsystem": "nvmf", 00:19:10.875 "config": [ 00:19:10.875 { 00:19:10.875 "method": "nvmf_set_config", 00:19:10.875 "params": { 00:19:10.875 "discovery_filter": "match_any", 00:19:10.875 "admin_cmd_passthru": { 00:19:10.875 "identify_ctrlr": false 00:19:10.875 }, 00:19:10.875 "dhchap_digests": [ 00:19:10.875 "sha256", 00:19:10.875 "sha384", 00:19:10.875 "sha512" 00:19:10.875 ], 00:19:10.875 "dhchap_dhgroups": [ 00:19:10.875 "null", 00:19:10.875 "ffdhe2048", 00:19:10.875 "ffdhe3072", 00:19:10.875 "ffdhe4096", 00:19:10.875 "ffdhe6144", 00:19:10.875 "ffdhe8192" 00:19:10.875 ] 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "nvmf_set_max_subsystems", 00:19:10.875 "params": { 00:19:10.875 "max_subsystems": 1024 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "nvmf_set_crdt", 00:19:10.875 "params": { 00:19:10.875 "crdt1": 0, 00:19:10.875 "crdt2": 0, 00:19:10.875 "crdt3": 0 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "nvmf_create_transport", 00:19:10.875 "params": { 00:19:10.875 "trtype": "TCP", 00:19:10.875 "max_queue_depth": 128, 00:19:10.875 "max_io_qpairs_per_ctrlr": 127, 00:19:10.875 "in_capsule_data_size": 4096, 00:19:10.875 "max_io_size": 131072, 00:19:10.875 "io_unit_size": 131072, 00:19:10.875 "max_aq_depth": 128, 00:19:10.875 "num_shared_buffers": 511, 00:19:10.875 "buf_cache_size": 4294967295, 00:19:10.875 "dif_insert_or_strip": false, 00:19:10.875 "zcopy": false, 00:19:10.875 "c2h_success": false, 00:19:10.875 "sock_priority": 0, 00:19:10.875 "abort_timeout_sec": 1, 00:19:10.875 "ack_timeout": 0, 00:19:10.875 "data_wr_pool_size": 0 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.875 "method": "nvmf_create_subsystem", 00:19:10.875 "params": { 00:19:10.875 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.875 
"allow_any_host": false, 00:19:10.875 "serial_number": "SPDK00000000000001", 00:19:10.875 "model_number": "SPDK bdev Controller", 00:19:10.875 "max_namespaces": 10, 00:19:10.875 "min_cntlid": 1, 00:19:10.875 "max_cntlid": 65519, 00:19:10.875 "ana_reporting": false 00:19:10.875 } 00:19:10.875 }, 00:19:10.875 { 00:19:10.876 "method": "nvmf_subsystem_add_host", 00:19:10.876 "params": { 00:19:10.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.876 "host": "nqn.2016-06.io.spdk:host1", 00:19:10.876 "psk": "key0" 00:19:10.876 } 00:19:10.876 }, 00:19:10.876 { 00:19:10.876 "method": "nvmf_subsystem_add_ns", 00:19:10.876 "params": { 00:19:10.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.876 "namespace": { 00:19:10.876 "nsid": 1, 00:19:10.876 "bdev_name": "malloc0", 00:19:10.876 "nguid": "D08FC8ADE76543DB9EFDF4BCB187D21F", 00:19:10.876 "uuid": "d08fc8ad-e765-43db-9efd-f4bcb187d21f", 00:19:10.876 "no_auto_visible": false 00:19:10.876 } 00:19:10.876 } 00:19:10.876 }, 00:19:10.876 { 00:19:10.876 "method": "nvmf_subsystem_add_listener", 00:19:10.876 "params": { 00:19:10.876 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.876 "listen_address": { 00:19:10.876 "trtype": "TCP", 00:19:10.876 "adrfam": "IPv4", 00:19:10.876 "traddr": "10.0.0.2", 00:19:10.876 "trsvcid": "4420" 00:19:10.876 }, 00:19:10.876 "secure_channel": true 00:19:10.876 } 00:19:10.876 } 00:19:10.876 ] 00:19:10.876 } 00:19:10.876 ] 00:19:10.876 }' 00:19:10.876 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:11.135 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@194 -- # bdevperfconf='{ 00:19:11.135 "subsystems": [ 00:19:11.135 { 00:19:11.135 "subsystem": "keyring", 00:19:11.135 "config": [ 00:19:11.135 { 00:19:11.135 "method": "keyring_file_add_key", 00:19:11.135 "params": { 00:19:11.135 "name": "key0", 00:19:11.135 "path": "/tmp/tmp.iRqze5yeVD" 00:19:11.135 } 
00:19:11.135 } 00:19:11.135 ] 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "subsystem": "iobuf", 00:19:11.135 "config": [ 00:19:11.135 { 00:19:11.135 "method": "iobuf_set_options", 00:19:11.135 "params": { 00:19:11.135 "small_pool_count": 8192, 00:19:11.135 "large_pool_count": 1024, 00:19:11.135 "small_bufsize": 8192, 00:19:11.135 "large_bufsize": 135168, 00:19:11.135 "enable_numa": false 00:19:11.135 } 00:19:11.135 } 00:19:11.135 ] 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "subsystem": "sock", 00:19:11.135 "config": [ 00:19:11.135 { 00:19:11.135 "method": "sock_set_default_impl", 00:19:11.135 "params": { 00:19:11.135 "impl_name": "posix" 00:19:11.135 } 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "method": "sock_impl_set_options", 00:19:11.135 "params": { 00:19:11.135 "impl_name": "ssl", 00:19:11.135 "recv_buf_size": 4096, 00:19:11.135 "send_buf_size": 4096, 00:19:11.135 "enable_recv_pipe": true, 00:19:11.135 "enable_quickack": false, 00:19:11.135 "enable_placement_id": 0, 00:19:11.135 "enable_zerocopy_send_server": true, 00:19:11.135 "enable_zerocopy_send_client": false, 00:19:11.135 "zerocopy_threshold": 0, 00:19:11.135 "tls_version": 0, 00:19:11.135 "enable_ktls": false 00:19:11.135 } 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "method": "sock_impl_set_options", 00:19:11.135 "params": { 00:19:11.135 "impl_name": "posix", 00:19:11.135 "recv_buf_size": 2097152, 00:19:11.135 "send_buf_size": 2097152, 00:19:11.135 "enable_recv_pipe": true, 00:19:11.135 "enable_quickack": false, 00:19:11.135 "enable_placement_id": 0, 00:19:11.135 "enable_zerocopy_send_server": true, 00:19:11.135 "enable_zerocopy_send_client": false, 00:19:11.135 "zerocopy_threshold": 0, 00:19:11.135 "tls_version": 0, 00:19:11.135 "enable_ktls": false 00:19:11.135 } 00:19:11.135 } 00:19:11.135 ] 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "subsystem": "vmd", 00:19:11.135 "config": [] 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "subsystem": "accel", 00:19:11.135 "config": [ 00:19:11.135 { 00:19:11.135 
"method": "accel_set_options", 00:19:11.135 "params": { 00:19:11.135 "small_cache_size": 128, 00:19:11.135 "large_cache_size": 16, 00:19:11.135 "task_count": 2048, 00:19:11.135 "sequence_count": 2048, 00:19:11.135 "buf_count": 2048 00:19:11.135 } 00:19:11.135 } 00:19:11.135 ] 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "subsystem": "bdev", 00:19:11.135 "config": [ 00:19:11.135 { 00:19:11.135 "method": "bdev_set_options", 00:19:11.135 "params": { 00:19:11.135 "bdev_io_pool_size": 65535, 00:19:11.135 "bdev_io_cache_size": 256, 00:19:11.135 "bdev_auto_examine": true, 00:19:11.135 "iobuf_small_cache_size": 128, 00:19:11.135 "iobuf_large_cache_size": 16 00:19:11.135 } 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "method": "bdev_raid_set_options", 00:19:11.135 "params": { 00:19:11.135 "process_window_size_kb": 1024, 00:19:11.135 "process_max_bandwidth_mb_sec": 0 00:19:11.135 } 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "method": "bdev_iscsi_set_options", 00:19:11.135 "params": { 00:19:11.135 "timeout_sec": 30 00:19:11.135 } 00:19:11.135 }, 00:19:11.135 { 00:19:11.135 "method": "bdev_nvme_set_options", 00:19:11.135 "params": { 00:19:11.135 "action_on_timeout": "none", 00:19:11.135 "timeout_us": 0, 00:19:11.135 "timeout_admin_us": 0, 00:19:11.135 "keep_alive_timeout_ms": 10000, 00:19:11.135 "arbitration_burst": 0, 00:19:11.135 "low_priority_weight": 0, 00:19:11.136 "medium_priority_weight": 0, 00:19:11.136 "high_priority_weight": 0, 00:19:11.136 "nvme_adminq_poll_period_us": 10000, 00:19:11.136 "nvme_ioq_poll_period_us": 0, 00:19:11.136 "io_queue_requests": 512, 00:19:11.136 "delay_cmd_submit": true, 00:19:11.136 "transport_retry_count": 4, 00:19:11.136 "bdev_retry_count": 3, 00:19:11.136 "transport_ack_timeout": 0, 00:19:11.136 "ctrlr_loss_timeout_sec": 0, 00:19:11.136 "reconnect_delay_sec": 0, 00:19:11.136 "fast_io_fail_timeout_sec": 0, 00:19:11.136 "disable_auto_failback": false, 00:19:11.136 "generate_uuids": false, 00:19:11.136 "transport_tos": 0, 00:19:11.136 
"nvme_error_stat": false, 00:19:11.136 "rdma_srq_size": 0, 00:19:11.136 "io_path_stat": false, 00:19:11.136 "allow_accel_sequence": false, 00:19:11.136 "rdma_max_cq_size": 0, 00:19:11.136 "rdma_cm_event_timeout_ms": 0, 00:19:11.136 "dhchap_digests": [ 00:19:11.136 "sha256", 00:19:11.136 "sha384", 00:19:11.136 "sha512" 00:19:11.136 ], 00:19:11.136 "dhchap_dhgroups": [ 00:19:11.136 "null", 00:19:11.136 "ffdhe2048", 00:19:11.136 "ffdhe3072", 00:19:11.136 "ffdhe4096", 00:19:11.136 "ffdhe6144", 00:19:11.136 "ffdhe8192" 00:19:11.136 ] 00:19:11.136 } 00:19:11.136 }, 00:19:11.136 { 00:19:11.136 "method": "bdev_nvme_attach_controller", 00:19:11.136 "params": { 00:19:11.136 "name": "TLSTEST", 00:19:11.136 "trtype": "TCP", 00:19:11.136 "adrfam": "IPv4", 00:19:11.136 "traddr": "10.0.0.2", 00:19:11.136 "trsvcid": "4420", 00:19:11.136 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.136 "prchk_reftag": false, 00:19:11.136 "prchk_guard": false, 00:19:11.136 "ctrlr_loss_timeout_sec": 0, 00:19:11.136 "reconnect_delay_sec": 0, 00:19:11.136 "fast_io_fail_timeout_sec": 0, 00:19:11.136 "psk": "key0", 00:19:11.136 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:11.136 "hdgst": false, 00:19:11.136 "ddgst": false, 00:19:11.136 "multipath": "multipath" 00:19:11.136 } 00:19:11.136 }, 00:19:11.136 { 00:19:11.136 "method": "bdev_nvme_set_hotplug", 00:19:11.136 "params": { 00:19:11.136 "period_us": 100000, 00:19:11.136 "enable": false 00:19:11.136 } 00:19:11.136 }, 00:19:11.136 { 00:19:11.136 "method": "bdev_wait_for_examine" 00:19:11.136 } 00:19:11.136 ] 00:19:11.136 }, 00:19:11.136 { 00:19:11.136 "subsystem": "nbd", 00:19:11.136 "config": [] 00:19:11.136 } 00:19:11.136 ] 00:19:11.136 }' 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # killprocess 3246800 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3246800 ']' 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- 
# kill -0 3246800 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246800 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246800' 00:19:11.136 killing process with pid 3246800 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3246800 00:19:11.136 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.136 00:19:11.136 Latency(us) 00:19:11.136 [2024-11-20T09:36:51.867Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.136 [2024-11-20T09:36:51.867Z] =================================================================================================================== 00:19:11.136 [2024-11-20T09:36:51.867Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.136 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3246800 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # killprocess 3246539 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3246539 ']' 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3246539 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3246539 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3246539' 00:19:11.395 killing process with pid 3246539 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3246539 00:19:11.395 10:36:51 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3246539 00:19:11.655 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:11.655 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:11.655 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:11.655 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # echo '{ 00:19:11.655 "subsystems": [ 00:19:11.655 { 00:19:11.655 "subsystem": "keyring", 00:19:11.655 "config": [ 00:19:11.655 { 00:19:11.655 "method": "keyring_file_add_key", 00:19:11.655 "params": { 00:19:11.655 "name": "key0", 00:19:11.655 "path": "/tmp/tmp.iRqze5yeVD" 00:19:11.655 } 00:19:11.655 } 00:19:11.655 ] 00:19:11.655 }, 00:19:11.655 { 00:19:11.655 "subsystem": "iobuf", 00:19:11.655 "config": [ 00:19:11.655 { 00:19:11.655 "method": "iobuf_set_options", 00:19:11.655 "params": { 00:19:11.655 "small_pool_count": 8192, 00:19:11.655 "large_pool_count": 1024, 00:19:11.655 "small_bufsize": 8192, 00:19:11.655 "large_bufsize": 135168, 00:19:11.655 "enable_numa": false 00:19:11.655 } 00:19:11.655 } 00:19:11.655 ] 00:19:11.655 }, 
00:19:11.655 { 00:19:11.655 "subsystem": "sock", 00:19:11.655 "config": [ 00:19:11.655 { 00:19:11.655 "method": "sock_set_default_impl", 00:19:11.655 "params": { 00:19:11.655 "impl_name": "posix" 00:19:11.655 } 00:19:11.655 }, 00:19:11.655 { 00:19:11.655 "method": "sock_impl_set_options", 00:19:11.655 "params": { 00:19:11.655 "impl_name": "ssl", 00:19:11.655 "recv_buf_size": 4096, 00:19:11.655 "send_buf_size": 4096, 00:19:11.655 "enable_recv_pipe": true, 00:19:11.655 "enable_quickack": false, 00:19:11.655 "enable_placement_id": 0, 00:19:11.655 "enable_zerocopy_send_server": true, 00:19:11.655 "enable_zerocopy_send_client": false, 00:19:11.655 "zerocopy_threshold": 0, 00:19:11.655 "tls_version": 0, 00:19:11.655 "enable_ktls": false 00:19:11.655 } 00:19:11.655 }, 00:19:11.655 { 00:19:11.655 "method": "sock_impl_set_options", 00:19:11.655 "params": { 00:19:11.656 "impl_name": "posix", 00:19:11.656 "recv_buf_size": 2097152, 00:19:11.656 "send_buf_size": 2097152, 00:19:11.656 "enable_recv_pipe": true, 00:19:11.656 "enable_quickack": false, 00:19:11.656 "enable_placement_id": 0, 00:19:11.656 "enable_zerocopy_send_server": true, 00:19:11.656 "enable_zerocopy_send_client": false, 00:19:11.656 "zerocopy_threshold": 0, 00:19:11.656 "tls_version": 0, 00:19:11.656 "enable_ktls": false 00:19:11.656 } 00:19:11.656 } 00:19:11.656 ] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "vmd", 00:19:11.656 "config": [] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "accel", 00:19:11.656 "config": [ 00:19:11.656 { 00:19:11.656 "method": "accel_set_options", 00:19:11.656 "params": { 00:19:11.656 "small_cache_size": 128, 00:19:11.656 "large_cache_size": 16, 00:19:11.656 "task_count": 2048, 00:19:11.656 "sequence_count": 2048, 00:19:11.656 "buf_count": 2048 00:19:11.656 } 00:19:11.656 } 00:19:11.656 ] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "bdev", 00:19:11.656 "config": [ 00:19:11.656 { 00:19:11.656 "method": "bdev_set_options", 00:19:11.656 "params": { 
00:19:11.656 "bdev_io_pool_size": 65535, 00:19:11.656 "bdev_io_cache_size": 256, 00:19:11.656 "bdev_auto_examine": true, 00:19:11.656 "iobuf_small_cache_size": 128, 00:19:11.656 "iobuf_large_cache_size": 16 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_raid_set_options", 00:19:11.656 "params": { 00:19:11.656 "process_window_size_kb": 1024, 00:19:11.656 "process_max_bandwidth_mb_sec": 0 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_iscsi_set_options", 00:19:11.656 "params": { 00:19:11.656 "timeout_sec": 30 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_nvme_set_options", 00:19:11.656 "params": { 00:19:11.656 "action_on_timeout": "none", 00:19:11.656 "timeout_us": 0, 00:19:11.656 "timeout_admin_us": 0, 00:19:11.656 "keep_alive_timeout_ms": 10000, 00:19:11.656 "arbitration_burst": 0, 00:19:11.656 "low_priority_weight": 0, 00:19:11.656 "medium_priority_weight": 0, 00:19:11.656 "high_priority_weight": 0, 00:19:11.656 "nvme_adminq_poll_period_us": 10000, 00:19:11.656 "nvme_ioq_poll_period_us": 0, 00:19:11.656 "io_queue_requests": 0, 00:19:11.656 "delay_cmd_submit": true, 00:19:11.656 "transport_retry_count": 4, 00:19:11.656 "bdev_retry_count": 3, 00:19:11.656 "transport_ack_timeout": 0, 00:19:11.656 "ctrlr_loss_timeout_sec": 0, 00:19:11.656 "reconnect_delay_sec": 0, 00:19:11.656 "fast_io_fail_timeout_sec": 0, 00:19:11.656 "disable_auto_failback": false, 00:19:11.656 "generate_uuids": false, 00:19:11.656 "transport_tos": 0, 00:19:11.656 "nvme_error_stat": false, 00:19:11.656 "rdma_srq_size": 0, 00:19:11.656 "io_path_stat": false, 00:19:11.656 "allow_accel_sequence": false, 00:19:11.656 "rdma_max_cq_size": 0, 00:19:11.656 "rdma_cm_event_timeout_ms": 0, 00:19:11.656 "dhchap_digests": [ 00:19:11.656 "sha256", 00:19:11.656 "sha384", 00:19:11.656 "sha512" 00:19:11.656 ], 00:19:11.656 "dhchap_dhgroups": [ 00:19:11.656 "null", 00:19:11.656 "ffdhe2048", 00:19:11.656 "ffdhe3072", 00:19:11.656 
"ffdhe4096", 00:19:11.656 "ffdhe6144", 00:19:11.656 "ffdhe8192" 00:19:11.656 ] 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_nvme_set_hotplug", 00:19:11.656 "params": { 00:19:11.656 "period_us": 100000, 00:19:11.656 "enable": false 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_malloc_create", 00:19:11.656 "params": { 00:19:11.656 "name": "malloc0", 00:19:11.656 "num_blocks": 8192, 00:19:11.656 "block_size": 4096, 00:19:11.656 "physical_block_size": 4096, 00:19:11.656 "uuid": "d08fc8ad-e765-43db-9efd-f4bcb187d21f", 00:19:11.656 "optimal_io_boundary": 0, 00:19:11.656 "md_size": 0, 00:19:11.656 "dif_type": 0, 00:19:11.656 "dif_is_head_of_md": false, 00:19:11.656 "dif_pi_format": 0 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "bdev_wait_for_examine" 00:19:11.656 } 00:19:11.656 ] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "nbd", 00:19:11.656 "config": [] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "scheduler", 00:19:11.656 "config": [ 00:19:11.656 { 00:19:11.656 "method": "framework_set_scheduler", 00:19:11.656 "params": { 00:19:11.656 "name": "static" 00:19:11.656 } 00:19:11.656 } 00:19:11.656 ] 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "subsystem": "nvmf", 00:19:11.656 "config": [ 00:19:11.656 { 00:19:11.656 "method": "nvmf_set_config", 00:19:11.656 "params": { 00:19:11.656 "discovery_filter": "match_any", 00:19:11.656 "admin_cmd_passthru": { 00:19:11.656 "identify_ctrlr": false 00:19:11.656 }, 00:19:11.656 "dhchap_digests": [ 00:19:11.656 "sha256", 00:19:11.656 "sha384", 00:19:11.656 "sha512" 00:19:11.656 ], 00:19:11.656 "dhchap_dhgroups": [ 00:19:11.656 "null", 00:19:11.656 "ffdhe2048", 00:19:11.656 "ffdhe3072", 00:19:11.656 "ffdhe4096", 00:19:11.656 "ffdhe6144", 00:19:11.656 "ffdhe8192" 00:19:11.656 ] 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_set_max_subsystems", 00:19:11.656 "params": { 00:19:11.656 "max_subsystems": 1024 
00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_set_crdt", 00:19:11.656 "params": { 00:19:11.656 "crdt1": 0, 00:19:11.656 "crdt2": 0, 00:19:11.656 "crdt3": 0 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_create_transport", 00:19:11.656 "params": { 00:19:11.656 "trtype": "TCP", 00:19:11.656 "max_queue_depth": 128, 00:19:11.656 "max_io_qpairs_per_ctrlr": 127, 00:19:11.656 "in_capsule_data_size": 4096, 00:19:11.656 "max_io_size": 131072, 00:19:11.656 "io_unit_size": 131072, 00:19:11.656 "max_aq_depth": 128, 00:19:11.656 "num_shared_buffers": 511, 00:19:11.656 "buf_cache_size": 4294967295, 00:19:11.656 "dif_insert_or_strip": false, 00:19:11.656 "zcopy": false, 00:19:11.656 "c2h_success": false, 00:19:11.656 "sock_priority": 0, 00:19:11.656 "abort_timeout_sec": 1, 00:19:11.656 "ack_timeout": 0, 00:19:11.656 "data_wr_pool_size": 0 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_create_subsystem", 00:19:11.656 "params": { 00:19:11.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.656 "allow_any_host": false, 00:19:11.656 "serial_number": "SPDK00000000000001", 00:19:11.656 "model_number": "SPDK bdev Controller", 00:19:11.656 "max_namespaces": 10, 00:19:11.656 "min_cntlid": 1, 00:19:11.656 "max_cntlid": 65519, 00:19:11.656 "ana_reporting": false 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_subsystem_add_host", 00:19:11.656 "params": { 00:19:11.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.656 "host": "nqn.2016-06.io.spdk:host1", 00:19:11.656 "psk": "key0" 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_subsystem_add_ns", 00:19:11.656 "params": { 00:19:11.656 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.656 "namespace": { 00:19:11.656 "nsid": 1, 00:19:11.656 "bdev_name": "malloc0", 00:19:11.656 "nguid": "D08FC8ADE76543DB9EFDF4BCB187D21F", 00:19:11.656 "uuid": "d08fc8ad-e765-43db-9efd-f4bcb187d21f", 00:19:11.656 "no_auto_visible": 
false 00:19:11.656 } 00:19:11.656 } 00:19:11.656 }, 00:19:11.656 { 00:19:11.656 "method": "nvmf_subsystem_add_listener", 00:19:11.656 "params": { 00:19:11.657 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.657 "listen_address": { 00:19:11.657 "trtype": "TCP", 00:19:11.657 "adrfam": "IPv4", 00:19:11.657 "traddr": "10.0.0.2", 00:19:11.657 "trsvcid": "4420" 00:19:11.657 }, 00:19:11.657 "secure_channel": true 00:19:11.657 } 00:19:11.657 } 00:19:11.657 ] 00:19:11.657 } 00:19:11.657 ] 00:19:11.657 }' 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3247047 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3247047 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3247047 ']' 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:11.657 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:11.657 10:36:52 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:11.657 [2024-11-20 10:36:52.212050] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:11.657 [2024-11-20 10:36:52.212095] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:11.657 [2024-11-20 10:36:52.286724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.657 [2024-11-20 10:36:52.326480] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:11.657 [2024-11-20 10:36:52.326514] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:11.657 [2024-11-20 10:36:52.326521] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:11.657 [2024-11-20 10:36:52.326527] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:11.657 [2024-11-20 10:36:52.326532] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:11.657 [2024-11-20 10:36:52.327086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:11.916 [2024-11-20 10:36:52.537703] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:11.916 [2024-11-20 10:36:52.569735] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:11.916 [2024-11-20 10:36:52.569926] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # bdevperf_pid=3247289 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@205 -- # waitforlisten 3247289 /var/tmp/bdevperf.sock 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3247289 ']' 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.484 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.484 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@201 -- # echo '{ 00:19:12.484 "subsystems": [ 00:19:12.484 { 00:19:12.484 "subsystem": "keyring", 00:19:12.484 "config": [ 00:19:12.484 { 00:19:12.484 "method": "keyring_file_add_key", 00:19:12.484 "params": { 00:19:12.484 "name": "key0", 00:19:12.484 "path": "/tmp/tmp.iRqze5yeVD" 00:19:12.484 } 00:19:12.484 } 00:19:12.484 ] 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "subsystem": "iobuf", 00:19:12.484 "config": [ 00:19:12.484 { 00:19:12.484 "method": "iobuf_set_options", 00:19:12.484 "params": { 00:19:12.484 "small_pool_count": 8192, 00:19:12.484 "large_pool_count": 1024, 00:19:12.484 "small_bufsize": 8192, 00:19:12.484 "large_bufsize": 135168, 00:19:12.484 "enable_numa": false 00:19:12.484 } 00:19:12.484 } 00:19:12.484 ] 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "subsystem": "sock", 00:19:12.484 "config": [ 00:19:12.484 { 00:19:12.484 "method": "sock_set_default_impl", 00:19:12.484 "params": { 00:19:12.484 "impl_name": "posix" 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "method": "sock_impl_set_options", 00:19:12.484 "params": { 00:19:12.484 "impl_name": "ssl", 00:19:12.484 "recv_buf_size": 4096, 00:19:12.484 "send_buf_size": 4096, 00:19:12.484 "enable_recv_pipe": true, 00:19:12.484 "enable_quickack": false, 00:19:12.484 "enable_placement_id": 0, 00:19:12.484 "enable_zerocopy_send_server": true, 00:19:12.484 "enable_zerocopy_send_client": false, 00:19:12.484 "zerocopy_threshold": 0, 00:19:12.484 "tls_version": 0, 00:19:12.484 "enable_ktls": false 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "method": "sock_impl_set_options", 00:19:12.484 "params": { 
00:19:12.484 "impl_name": "posix", 00:19:12.484 "recv_buf_size": 2097152, 00:19:12.484 "send_buf_size": 2097152, 00:19:12.484 "enable_recv_pipe": true, 00:19:12.484 "enable_quickack": false, 00:19:12.484 "enable_placement_id": 0, 00:19:12.484 "enable_zerocopy_send_server": true, 00:19:12.484 "enable_zerocopy_send_client": false, 00:19:12.484 "zerocopy_threshold": 0, 00:19:12.484 "tls_version": 0, 00:19:12.484 "enable_ktls": false 00:19:12.484 } 00:19:12.484 } 00:19:12.484 ] 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "subsystem": "vmd", 00:19:12.484 "config": [] 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "subsystem": "accel", 00:19:12.484 "config": [ 00:19:12.484 { 00:19:12.484 "method": "accel_set_options", 00:19:12.484 "params": { 00:19:12.484 "small_cache_size": 128, 00:19:12.484 "large_cache_size": 16, 00:19:12.484 "task_count": 2048, 00:19:12.484 "sequence_count": 2048, 00:19:12.484 "buf_count": 2048 00:19:12.484 } 00:19:12.484 } 00:19:12.484 ] 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "subsystem": "bdev", 00:19:12.484 "config": [ 00:19:12.484 { 00:19:12.484 "method": "bdev_set_options", 00:19:12.484 "params": { 00:19:12.484 "bdev_io_pool_size": 65535, 00:19:12.484 "bdev_io_cache_size": 256, 00:19:12.484 "bdev_auto_examine": true, 00:19:12.484 "iobuf_small_cache_size": 128, 00:19:12.484 "iobuf_large_cache_size": 16 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "method": "bdev_raid_set_options", 00:19:12.484 "params": { 00:19:12.484 "process_window_size_kb": 1024, 00:19:12.484 "process_max_bandwidth_mb_sec": 0 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "method": "bdev_iscsi_set_options", 00:19:12.484 "params": { 00:19:12.484 "timeout_sec": 30 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.484 "method": "bdev_nvme_set_options", 00:19:12.484 "params": { 00:19:12.484 "action_on_timeout": "none", 00:19:12.484 "timeout_us": 0, 00:19:12.484 "timeout_admin_us": 0, 00:19:12.484 "keep_alive_timeout_ms": 10000, 00:19:12.484 
"arbitration_burst": 0, 00:19:12.484 "low_priority_weight": 0, 00:19:12.484 "medium_priority_weight": 0, 00:19:12.484 "high_priority_weight": 0, 00:19:12.484 "nvme_adminq_poll_period_us": 10000, 00:19:12.484 "nvme_ioq_poll_period_us": 0, 00:19:12.484 "io_queue_requests": 512, 00:19:12.484 "delay_cmd_submit": true, 00:19:12.484 "transport_retry_count": 4, 00:19:12.484 "bdev_retry_count": 3, 00:19:12.484 "transport_ack_timeout": 0, 00:19:12.484 "ctrlr_loss_timeout_sec": 0, 00:19:12.484 "reconnect_delay_sec": 0, 00:19:12.484 "fast_io_fail_timeout_sec": 0, 00:19:12.484 "disable_auto_failback": false, 00:19:12.484 "generate_uuids": false, 00:19:12.484 "transport_tos": 0, 00:19:12.484 "nvme_error_stat": false, 00:19:12.484 "rdma_srq_size": 0, 00:19:12.484 "io_path_stat": false, 00:19:12.484 "allow_accel_sequence": false, 00:19:12.484 "rdma_max_cq_size": 0, 00:19:12.484 "rdma_cm_event_timeout_ms": 0, 00:19:12.484 "dhchap_digests": [ 00:19:12.484 "sha256", 00:19:12.484 "sha384", 00:19:12.484 "sha512" 00:19:12.484 ], 00:19:12.484 "dhchap_dhgroups": [ 00:19:12.484 "null", 00:19:12.484 "ffdhe2048", 00:19:12.484 "ffdhe3072", 00:19:12.484 "ffdhe4096", 00:19:12.484 "ffdhe6144", 00:19:12.484 "ffdhe8192" 00:19:12.484 ] 00:19:12.484 } 00:19:12.484 }, 00:19:12.484 { 00:19:12.485 "method": "bdev_nvme_attach_controller", 00:19:12.485 "params": { 00:19:12.485 "name": "TLSTEST", 00:19:12.485 "trtype": "TCP", 00:19:12.485 "adrfam": "IPv4", 00:19:12.485 "traddr": "10.0.0.2", 00:19:12.485 "trsvcid": "4420", 00:19:12.485 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:12.485 "prchk_reftag": false, 00:19:12.485 "prchk_guard": false, 00:19:12.485 "ctrlr_loss_timeout_sec": 0, 00:19:12.485 "reconnect_delay_sec": 0, 00:19:12.485 "fast_io_fail_timeout_sec": 0, 00:19:12.485 "psk": "key0", 00:19:12.485 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:12.485 "hdgst": false, 00:19:12.485 "ddgst": false, 00:19:12.485 "multipath": "multipath" 00:19:12.485 } 00:19:12.485 }, 00:19:12.485 { 00:19:12.485 
"method": "bdev_nvme_set_hotplug", 00:19:12.485 "params": { 00:19:12.485 "period_us": 100000, 00:19:12.485 "enable": false 00:19:12.485 } 00:19:12.485 }, 00:19:12.485 { 00:19:12.485 "method": "bdev_wait_for_examine" 00:19:12.485 } 00:19:12.485 ] 00:19:12.485 }, 00:19:12.485 { 00:19:12.485 "subsystem": "nbd", 00:19:12.485 "config": [] 00:19:12.485 } 00:19:12.485 ] 00:19:12.485 }' 00:19:12.485 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:12.485 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.485 [2024-11-20 10:36:53.116459] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:12.485 [2024-11-20 10:36:53.116509] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3247289 ] 00:19:12.485 [2024-11-20 10:36:53.188421] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.744 [2024-11-20 10:36:53.228781] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:12.744 [2024-11-20 10:36:53.380458] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.310 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:13.310 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:13.310 10:36:53 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:13.567 Running I/O for 10 seconds... 
00:19:15.434 5445.00 IOPS, 21.27 MiB/s [2024-11-20T09:36:57.100Z] 5512.50 IOPS, 21.53 MiB/s [2024-11-20T09:36:58.474Z] 5514.00 IOPS, 21.54 MiB/s [2024-11-20T09:36:59.406Z] 5528.50 IOPS, 21.60 MiB/s [2024-11-20T09:37:00.341Z] 5545.40 IOPS, 21.66 MiB/s [2024-11-20T09:37:01.274Z] 5548.33 IOPS, 21.67 MiB/s [2024-11-20T09:37:02.210Z] 5523.57 IOPS, 21.58 MiB/s [2024-11-20T09:37:03.145Z] 5543.00 IOPS, 21.65 MiB/s [2024-11-20T09:37:04.079Z] 5553.44 IOPS, 21.69 MiB/s [2024-11-20T09:37:04.338Z] 5562.20 IOPS, 21.73 MiB/s 00:19:23.607 Latency(us) 00:19:23.607 [2024-11-20T09:37:04.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.607 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:23.607 Verification LBA range: start 0x0 length 0x2000 00:19:23.607 TLSTESTn1 : 10.02 5564.02 21.73 0.00 0.00 22965.36 6147.90 36949.82 00:19:23.607 [2024-11-20T09:37:04.338Z] =================================================================================================================== 00:19:23.607 [2024-11-20T09:37:04.338Z] Total : 5564.02 21.73 0.00 0.00 22965.36 6147.90 36949.82 00:19:23.607 { 00:19:23.607 "results": [ 00:19:23.607 { 00:19:23.607 "job": "TLSTESTn1", 00:19:23.607 "core_mask": "0x4", 00:19:23.607 "workload": "verify", 00:19:23.607 "status": "finished", 00:19:23.607 "verify_range": { 00:19:23.607 "start": 0, 00:19:23.607 "length": 8192 00:19:23.607 }, 00:19:23.607 "queue_depth": 128, 00:19:23.607 "io_size": 4096, 00:19:23.607 "runtime": 10.019558, 00:19:23.607 "iops": 5564.0178938033, 00:19:23.607 "mibps": 21.73444489766914, 00:19:23.607 "io_failed": 0, 00:19:23.607 "io_timeout": 0, 00:19:23.607 "avg_latency_us": 22965.355686585026, 00:19:23.607 "min_latency_us": 6147.900952380953, 00:19:23.607 "max_latency_us": 36949.82095238095 00:19:23.607 } 00:19:23.607 ], 00:19:23.607 "core_count": 1 00:19:23.607 } 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@210 -- # trap 'nvmftestfini; exit 
1' SIGINT SIGTERM EXIT 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # killprocess 3247289 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3247289 ']' 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3247289 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247289 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247289' 00:19:23.607 killing process with pid 3247289 00:19:23.607 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3247289 00:19:23.607 Received shutdown signal, test time was about 10.000000 seconds 00:19:23.607 00:19:23.607 Latency(us) 00:19:23.607 [2024-11-20T09:37:04.338Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:23.607 [2024-11-20T09:37:04.338Z] =================================================================================================================== 00:19:23.607 [2024-11-20T09:37:04.339Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3247289 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@212 -- # killprocess 3247047 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # '[' -z 3247047 ']' 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3247047 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:23.608 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3247047 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3247047' 00:19:23.867 killing process with pid 3247047 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3247047 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3247047 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # nvmfappstart 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3249138 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3249138 00:19:23.867 
10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3249138 ']' 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.867 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.867 10:37:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:24.126 [2024-11-20 10:37:04.604462] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:24.126 [2024-11-20 10:37:04.604512] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:24.126 [2024-11-20 10:37:04.680877] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:24.126 [2024-11-20 10:37:04.717488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:24.126 [2024-11-20 10:37:04.717520] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:24.126 [2024-11-20 10:37:04.717528] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:24.126 [2024-11-20 10:37:04.717534] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:19:24.126 [2024-11-20 10:37:04.717539] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:24.126 [2024-11-20 10:37:04.718106] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@216 -- # setup_nvmf_tgt /tmp/tmp.iRqze5yeVD 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@50 -- # local key=/tmp/tmp.iRqze5yeVD 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:25.061 [2024-11-20 10:37:05.642681] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:25.061 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:25.320 10:37:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:25.320 [2024-11-20 10:37:06.031663] tcp.c:1031:nvmf_tcp_listen: 
*NOTICE*: TLS support is considered experimental 00:19:25.320 [2024-11-20 10:37:06.031872] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:25.579 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:25.579 malloc0 00:19:25.579 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:25.837 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:26.096 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key0 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # bdevperf_pid=3249469 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@217 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@221 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # waitforlisten 3249469 /var/tmp/bdevperf.sock 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3249469 ']' 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:26.354 
10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:26.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:26.354 10:37:06 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:26.354 [2024-11-20 10:37:06.891145] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:26.354 [2024-11-20 10:37:06.891198] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249469 ] 00:19:26.354 [2024-11-20 10:37:06.967594] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:26.354 [2024-11-20 10:37:07.008717] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:26.612 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:26.612 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:26.612 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:26.613 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:26.871 [2024-11-20 10:37:07.463267] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is 
considered experimental 00:19:26.871 nvme0n1 00:19:26.871 10:37:07 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@229 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:27.128 Running I/O for 1 seconds... 00:19:28.064 5325.00 IOPS, 20.80 MiB/s 00:19:28.064 Latency(us) 00:19:28.064 [2024-11-20T09:37:08.795Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.064 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:28.064 Verification LBA range: start 0x0 length 0x2000 00:19:28.064 nvme0n1 : 1.02 5359.47 20.94 0.00 0.00 23688.08 4868.39 20721.86 00:19:28.064 [2024-11-20T09:37:08.795Z] =================================================================================================================== 00:19:28.064 [2024-11-20T09:37:08.795Z] Total : 5359.47 20.94 0.00 0.00 23688.08 4868.39 20721.86 00:19:28.064 { 00:19:28.064 "results": [ 00:19:28.064 { 00:19:28.064 "job": "nvme0n1", 00:19:28.064 "core_mask": "0x2", 00:19:28.064 "workload": "verify", 00:19:28.064 "status": "finished", 00:19:28.064 "verify_range": { 00:19:28.064 "start": 0, 00:19:28.064 "length": 8192 00:19:28.064 }, 00:19:28.064 "queue_depth": 128, 00:19:28.064 "io_size": 4096, 00:19:28.064 "runtime": 1.017637, 00:19:28.064 "iops": 5359.474940474845, 00:19:28.064 "mibps": 20.935448986229865, 00:19:28.064 "io_failed": 0, 00:19:28.064 "io_timeout": 0, 00:19:28.065 "avg_latency_us": 23688.08487401121, 00:19:28.065 "min_latency_us": 4868.388571428572, 00:19:28.065 "max_latency_us": 20721.859047619047 00:19:28.065 } 00:19:28.065 ], 00:19:28.065 "core_count": 1 00:19:28.065 } 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@231 -- # killprocess 3249469 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3249469 ']' 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@958 -- # kill -0 3249469 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249469 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249469' 00:19:28.065 killing process with pid 3249469 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3249469 00:19:28.065 Received shutdown signal, test time was about 1.000000 seconds 00:19:28.065 00:19:28.065 Latency(us) 00:19:28.065 [2024-11-20T09:37:08.796Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.065 [2024-11-20T09:37:08.796Z] =================================================================================================================== 00:19:28.065 [2024-11-20T09:37:08.796Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.065 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3249469 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # killprocess 3249138 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3249138 ']' 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3249138 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249138 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249138' 00:19:28.324 killing process with pid 3249138 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3249138 00:19:28.324 10:37:08 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3249138 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@237 -- # nvmfappstart 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3249868 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3249868 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3249868 ']' 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:28.583 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.583 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.583 [2024-11-20 10:37:09.168701] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:28.583 [2024-11-20 10:37:09.168750] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:28.583 [2024-11-20 10:37:09.243763] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.583 [2024-11-20 10:37:09.283701] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:28.583 [2024-11-20 10:37:09.283739] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:28.583 [2024-11-20 10:37:09.283746] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:28.583 [2024-11-20 10:37:09.283752] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:28.583 [2024-11-20 10:37:09.283758] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:28.583 [2024-11-20 10:37:09.284360] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@238 -- # rpc_cmd 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.841 [2024-11-20 10:37:09.419395] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:28.841 malloc0 00:19:28.841 [2024-11-20 10:37:09.447588] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:28.841 [2024-11-20 10:37:09.447788] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@251 -- # bdevperf_pid=3249890 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@253 -- # waitforlisten 3249890 /var/tmp/bdevperf.sock 00:19:28.841 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@249 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf 
-m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3249890 ']' 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.842 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:28.842 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.842 [2024-11-20 10:37:09.522457] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
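`waitforlisten` above blocks until the bdevperf process is up and its RPC socket (`/var/tmp/bdevperf.sock`) accepts commands. A minimal bash sketch of the idea — the real helper in autotest_common.sh polls the RPC server itself, whereas this placeholder only waits for the UNIX-domain socket to exist:

```shell
# Poll until a UNIX-domain socket appears (sketch; socket path and retry
# count are placeholders, not the autotest defaults).
wait_for_sock() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0   # -S: file exists and is a socket
    sleep 0.1
  done
  return 1
}
```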
00:19:28.842 [2024-11-20 10:37:09.522498] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3249890 ] 00:19:29.100 [2024-11-20 10:37:09.596933] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.100 [2024-11-20 10:37:09.638114] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:29.100 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:29.100 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:29.100 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.iRqze5yeVD 00:19:29.359 10:37:09 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@255 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:29.617 [2024-11-20 10:37:10.105636] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.617 nvme0n1 00:19:29.617 10:37:10 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@259 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:29.617 Running I/O for 1 seconds... 
00:19:30.993 5405.00 IOPS, 21.11 MiB/s 00:19:30.993 Latency(us) 00:19:30.993 [2024-11-20T09:37:11.724Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:30.993 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:30.993 Verification LBA range: start 0x0 length 0x2000 00:19:30.993 nvme0n1 : 1.02 5449.92 21.29 0.00 0.00 23299.47 7396.21 24092.28 00:19:30.993 [2024-11-20T09:37:11.724Z] =================================================================================================================== 00:19:30.993 [2024-11-20T09:37:11.724Z] Total : 5449.92 21.29 0.00 0.00 23299.47 7396.21 24092.28 00:19:30.993 { 00:19:30.993 "results": [ 00:19:30.993 { 00:19:30.993 "job": "nvme0n1", 00:19:30.993 "core_mask": "0x2", 00:19:30.993 "workload": "verify", 00:19:30.993 "status": "finished", 00:19:30.993 "verify_range": { 00:19:30.993 "start": 0, 00:19:30.993 "length": 8192 00:19:30.993 }, 00:19:30.993 "queue_depth": 128, 00:19:30.993 "io_size": 4096, 00:19:30.993 "runtime": 1.015244, 00:19:30.993 "iops": 5449.921398205752, 00:19:30.993 "mibps": 21.288755461741218, 00:19:30.993 "io_failed": 0, 00:19:30.993 "io_timeout": 0, 00:19:30.993 "avg_latency_us": 23299.46568209789, 00:19:30.993 "min_latency_us": 7396.205714285714, 00:19:30.993 "max_latency_us": 24092.281904761905 00:19:30.993 } 00:19:30.993 ], 00:19:30.993 "core_count": 1 00:19:30.993 } 00:19:30.993 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # rpc_cmd save_config 00:19:30.993 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@563 -- # xtrace_disable 00:19:30.993 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.993 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:19:30.993 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # tgtcfg='{ 00:19:30.993 "subsystems": [ 00:19:30.993 { 00:19:30.993 "subsystem": 
"keyring", 00:19:30.993 "config": [ 00:19:30.993 { 00:19:30.993 "method": "keyring_file_add_key", 00:19:30.993 "params": { 00:19:30.993 "name": "key0", 00:19:30.993 "path": "/tmp/tmp.iRqze5yeVD" 00:19:30.993 } 00:19:30.993 } 00:19:30.993 ] 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "subsystem": "iobuf", 00:19:30.993 "config": [ 00:19:30.993 { 00:19:30.993 "method": "iobuf_set_options", 00:19:30.993 "params": { 00:19:30.993 "small_pool_count": 8192, 00:19:30.993 "large_pool_count": 1024, 00:19:30.993 "small_bufsize": 8192, 00:19:30.993 "large_bufsize": 135168, 00:19:30.993 "enable_numa": false 00:19:30.993 } 00:19:30.993 } 00:19:30.993 ] 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "subsystem": "sock", 00:19:30.993 "config": [ 00:19:30.993 { 00:19:30.993 "method": "sock_set_default_impl", 00:19:30.993 "params": { 00:19:30.993 "impl_name": "posix" 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "sock_impl_set_options", 00:19:30.993 "params": { 00:19:30.993 "impl_name": "ssl", 00:19:30.993 "recv_buf_size": 4096, 00:19:30.993 "send_buf_size": 4096, 00:19:30.993 "enable_recv_pipe": true, 00:19:30.993 "enable_quickack": false, 00:19:30.993 "enable_placement_id": 0, 00:19:30.993 "enable_zerocopy_send_server": true, 00:19:30.993 "enable_zerocopy_send_client": false, 00:19:30.993 "zerocopy_threshold": 0, 00:19:30.993 "tls_version": 0, 00:19:30.993 "enable_ktls": false 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "sock_impl_set_options", 00:19:30.993 "params": { 00:19:30.993 "impl_name": "posix", 00:19:30.993 "recv_buf_size": 2097152, 00:19:30.993 "send_buf_size": 2097152, 00:19:30.993 "enable_recv_pipe": true, 00:19:30.993 "enable_quickack": false, 00:19:30.993 "enable_placement_id": 0, 00:19:30.993 "enable_zerocopy_send_server": true, 00:19:30.993 "enable_zerocopy_send_client": false, 00:19:30.993 "zerocopy_threshold": 0, 00:19:30.993 "tls_version": 0, 00:19:30.993 "enable_ktls": false 00:19:30.993 } 00:19:30.993 } 00:19:30.993 
] 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "subsystem": "vmd", 00:19:30.993 "config": [] 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "subsystem": "accel", 00:19:30.993 "config": [ 00:19:30.993 { 00:19:30.993 "method": "accel_set_options", 00:19:30.993 "params": { 00:19:30.993 "small_cache_size": 128, 00:19:30.993 "large_cache_size": 16, 00:19:30.993 "task_count": 2048, 00:19:30.993 "sequence_count": 2048, 00:19:30.993 "buf_count": 2048 00:19:30.993 } 00:19:30.993 } 00:19:30.993 ] 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "subsystem": "bdev", 00:19:30.993 "config": [ 00:19:30.993 { 00:19:30.993 "method": "bdev_set_options", 00:19:30.993 "params": { 00:19:30.993 "bdev_io_pool_size": 65535, 00:19:30.993 "bdev_io_cache_size": 256, 00:19:30.993 "bdev_auto_examine": true, 00:19:30.993 "iobuf_small_cache_size": 128, 00:19:30.993 "iobuf_large_cache_size": 16 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "bdev_raid_set_options", 00:19:30.993 "params": { 00:19:30.993 "process_window_size_kb": 1024, 00:19:30.993 "process_max_bandwidth_mb_sec": 0 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "bdev_iscsi_set_options", 00:19:30.993 "params": { 00:19:30.993 "timeout_sec": 30 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "bdev_nvme_set_options", 00:19:30.993 "params": { 00:19:30.993 "action_on_timeout": "none", 00:19:30.993 "timeout_us": 0, 00:19:30.993 "timeout_admin_us": 0, 00:19:30.993 "keep_alive_timeout_ms": 10000, 00:19:30.993 "arbitration_burst": 0, 00:19:30.993 "low_priority_weight": 0, 00:19:30.993 "medium_priority_weight": 0, 00:19:30.993 "high_priority_weight": 0, 00:19:30.993 "nvme_adminq_poll_period_us": 10000, 00:19:30.993 "nvme_ioq_poll_period_us": 0, 00:19:30.993 "io_queue_requests": 0, 00:19:30.993 "delay_cmd_submit": true, 00:19:30.993 "transport_retry_count": 4, 00:19:30.993 "bdev_retry_count": 3, 00:19:30.993 "transport_ack_timeout": 0, 00:19:30.993 "ctrlr_loss_timeout_sec": 0, 
00:19:30.993 "reconnect_delay_sec": 0, 00:19:30.993 "fast_io_fail_timeout_sec": 0, 00:19:30.993 "disable_auto_failback": false, 00:19:30.993 "generate_uuids": false, 00:19:30.993 "transport_tos": 0, 00:19:30.993 "nvme_error_stat": false, 00:19:30.993 "rdma_srq_size": 0, 00:19:30.993 "io_path_stat": false, 00:19:30.993 "allow_accel_sequence": false, 00:19:30.993 "rdma_max_cq_size": 0, 00:19:30.993 "rdma_cm_event_timeout_ms": 0, 00:19:30.993 "dhchap_digests": [ 00:19:30.993 "sha256", 00:19:30.993 "sha384", 00:19:30.993 "sha512" 00:19:30.993 ], 00:19:30.993 "dhchap_dhgroups": [ 00:19:30.993 "null", 00:19:30.993 "ffdhe2048", 00:19:30.993 "ffdhe3072", 00:19:30.993 "ffdhe4096", 00:19:30.993 "ffdhe6144", 00:19:30.993 "ffdhe8192" 00:19:30.993 ] 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "bdev_nvme_set_hotplug", 00:19:30.993 "params": { 00:19:30.993 "period_us": 100000, 00:19:30.993 "enable": false 00:19:30.993 } 00:19:30.993 }, 00:19:30.993 { 00:19:30.993 "method": "bdev_malloc_create", 00:19:30.993 "params": { 00:19:30.993 "name": "malloc0", 00:19:30.993 "num_blocks": 8192, 00:19:30.993 "block_size": 4096, 00:19:30.993 "physical_block_size": 4096, 00:19:30.993 "uuid": "a775ff4d-ced2-4b32-a49b-34603de06e2a", 00:19:30.993 "optimal_io_boundary": 0, 00:19:30.993 "md_size": 0, 00:19:30.993 "dif_type": 0, 00:19:30.993 "dif_is_head_of_md": false, 00:19:30.993 "dif_pi_format": 0 00:19:30.993 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "bdev_wait_for_examine" 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "nbd", 00:19:30.994 "config": [] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "scheduler", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "framework_set_scheduler", 00:19:30.994 "params": { 00:19:30.994 "name": "static" 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "nvmf", 00:19:30.994 "config": [ 00:19:30.994 { 
00:19:30.994 "method": "nvmf_set_config", 00:19:30.994 "params": { 00:19:30.994 "discovery_filter": "match_any", 00:19:30.994 "admin_cmd_passthru": { 00:19:30.994 "identify_ctrlr": false 00:19:30.994 }, 00:19:30.994 "dhchap_digests": [ 00:19:30.994 "sha256", 00:19:30.994 "sha384", 00:19:30.994 "sha512" 00:19:30.994 ], 00:19:30.994 "dhchap_dhgroups": [ 00:19:30.994 "null", 00:19:30.994 "ffdhe2048", 00:19:30.994 "ffdhe3072", 00:19:30.994 "ffdhe4096", 00:19:30.994 "ffdhe6144", 00:19:30.994 "ffdhe8192" 00:19:30.994 ] 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_set_max_subsystems", 00:19:30.994 "params": { 00:19:30.994 "max_subsystems": 1024 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_set_crdt", 00:19:30.994 "params": { 00:19:30.994 "crdt1": 0, 00:19:30.994 "crdt2": 0, 00:19:30.994 "crdt3": 0 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_create_transport", 00:19:30.994 "params": { 00:19:30.994 "trtype": "TCP", 00:19:30.994 "max_queue_depth": 128, 00:19:30.994 "max_io_qpairs_per_ctrlr": 127, 00:19:30.994 "in_capsule_data_size": 4096, 00:19:30.994 "max_io_size": 131072, 00:19:30.994 "io_unit_size": 131072, 00:19:30.994 "max_aq_depth": 128, 00:19:30.994 "num_shared_buffers": 511, 00:19:30.994 "buf_cache_size": 4294967295, 00:19:30.994 "dif_insert_or_strip": false, 00:19:30.994 "zcopy": false, 00:19:30.994 "c2h_success": false, 00:19:30.994 "sock_priority": 0, 00:19:30.994 "abort_timeout_sec": 1, 00:19:30.994 "ack_timeout": 0, 00:19:30.994 "data_wr_pool_size": 0 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_create_subsystem", 00:19:30.994 "params": { 00:19:30.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.994 "allow_any_host": false, 00:19:30.994 "serial_number": "00000000000000000000", 00:19:30.994 "model_number": "SPDK bdev Controller", 00:19:30.994 "max_namespaces": 32, 00:19:30.994 "min_cntlid": 1, 00:19:30.994 "max_cntlid": 65519, 00:19:30.994 
"ana_reporting": false 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_subsystem_add_host", 00:19:30.994 "params": { 00:19:30.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.994 "host": "nqn.2016-06.io.spdk:host1", 00:19:30.994 "psk": "key0" 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_subsystem_add_ns", 00:19:30.994 "params": { 00:19:30.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.994 "namespace": { 00:19:30.994 "nsid": 1, 00:19:30.994 "bdev_name": "malloc0", 00:19:30.994 "nguid": "A775FF4DCED24B32A49B34603DE06E2A", 00:19:30.994 "uuid": "a775ff4d-ced2-4b32-a49b-34603de06e2a", 00:19:30.994 "no_auto_visible": false 00:19:30.994 } 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "nvmf_subsystem_add_listener", 00:19:30.994 "params": { 00:19:30.994 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.994 "listen_address": { 00:19:30.994 "trtype": "TCP", 00:19:30.994 "adrfam": "IPv4", 00:19:30.994 "traddr": "10.0.0.2", 00:19:30.994 "trsvcid": "4420" 00:19:30.994 }, 00:19:30.994 "secure_channel": false, 00:19:30.994 "sock_impl": "ssl" 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }' 00:19:30.994 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:30.994 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@263 -- # bperfcfg='{ 00:19:30.994 "subsystems": [ 00:19:30.994 { 00:19:30.994 "subsystem": "keyring", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "keyring_file_add_key", 00:19:30.994 "params": { 00:19:30.994 "name": "key0", 00:19:30.994 "path": "/tmp/tmp.iRqze5yeVD" 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "iobuf", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "iobuf_set_options", 00:19:30.994 "params": { 00:19:30.994 
"small_pool_count": 8192, 00:19:30.994 "large_pool_count": 1024, 00:19:30.994 "small_bufsize": 8192, 00:19:30.994 "large_bufsize": 135168, 00:19:30.994 "enable_numa": false 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "sock", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "sock_set_default_impl", 00:19:30.994 "params": { 00:19:30.994 "impl_name": "posix" 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "sock_impl_set_options", 00:19:30.994 "params": { 00:19:30.994 "impl_name": "ssl", 00:19:30.994 "recv_buf_size": 4096, 00:19:30.994 "send_buf_size": 4096, 00:19:30.994 "enable_recv_pipe": true, 00:19:30.994 "enable_quickack": false, 00:19:30.994 "enable_placement_id": 0, 00:19:30.994 "enable_zerocopy_send_server": true, 00:19:30.994 "enable_zerocopy_send_client": false, 00:19:30.994 "zerocopy_threshold": 0, 00:19:30.994 "tls_version": 0, 00:19:30.994 "enable_ktls": false 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "sock_impl_set_options", 00:19:30.994 "params": { 00:19:30.994 "impl_name": "posix", 00:19:30.994 "recv_buf_size": 2097152, 00:19:30.994 "send_buf_size": 2097152, 00:19:30.994 "enable_recv_pipe": true, 00:19:30.994 "enable_quickack": false, 00:19:30.994 "enable_placement_id": 0, 00:19:30.994 "enable_zerocopy_send_server": true, 00:19:30.994 "enable_zerocopy_send_client": false, 00:19:30.994 "zerocopy_threshold": 0, 00:19:30.994 "tls_version": 0, 00:19:30.994 "enable_ktls": false 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "vmd", 00:19:30.994 "config": [] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "accel", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "accel_set_options", 00:19:30.994 "params": { 00:19:30.994 "small_cache_size": 128, 00:19:30.994 "large_cache_size": 16, 00:19:30.994 "task_count": 2048, 00:19:30.994 "sequence_count": 2048, 00:19:30.994 
"buf_count": 2048 00:19:30.994 } 00:19:30.994 } 00:19:30.994 ] 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "subsystem": "bdev", 00:19:30.994 "config": [ 00:19:30.994 { 00:19:30.994 "method": "bdev_set_options", 00:19:30.994 "params": { 00:19:30.994 "bdev_io_pool_size": 65535, 00:19:30.994 "bdev_io_cache_size": 256, 00:19:30.994 "bdev_auto_examine": true, 00:19:30.994 "iobuf_small_cache_size": 128, 00:19:30.994 "iobuf_large_cache_size": 16 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "bdev_raid_set_options", 00:19:30.994 "params": { 00:19:30.994 "process_window_size_kb": 1024, 00:19:30.994 "process_max_bandwidth_mb_sec": 0 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "bdev_iscsi_set_options", 00:19:30.994 "params": { 00:19:30.994 "timeout_sec": 30 00:19:30.994 } 00:19:30.994 }, 00:19:30.994 { 00:19:30.994 "method": "bdev_nvme_set_options", 00:19:30.994 "params": { 00:19:30.995 "action_on_timeout": "none", 00:19:30.995 "timeout_us": 0, 00:19:30.995 "timeout_admin_us": 0, 00:19:30.995 "keep_alive_timeout_ms": 10000, 00:19:30.995 "arbitration_burst": 0, 00:19:30.995 "low_priority_weight": 0, 00:19:30.995 "medium_priority_weight": 0, 00:19:30.995 "high_priority_weight": 0, 00:19:30.995 "nvme_adminq_poll_period_us": 10000, 00:19:30.995 "nvme_ioq_poll_period_us": 0, 00:19:30.995 "io_queue_requests": 512, 00:19:30.995 "delay_cmd_submit": true, 00:19:30.995 "transport_retry_count": 4, 00:19:30.995 "bdev_retry_count": 3, 00:19:30.995 "transport_ack_timeout": 0, 00:19:30.995 "ctrlr_loss_timeout_sec": 0, 00:19:30.995 "reconnect_delay_sec": 0, 00:19:30.995 "fast_io_fail_timeout_sec": 0, 00:19:30.995 "disable_auto_failback": false, 00:19:30.995 "generate_uuids": false, 00:19:30.995 "transport_tos": 0, 00:19:30.995 "nvme_error_stat": false, 00:19:30.995 "rdma_srq_size": 0, 00:19:30.995 "io_path_stat": false, 00:19:30.995 "allow_accel_sequence": false, 00:19:30.995 "rdma_max_cq_size": 0, 00:19:30.995 "rdma_cm_event_timeout_ms": 0, 
00:19:30.995 "dhchap_digests": [ 00:19:30.995 "sha256", 00:19:30.995 "sha384", 00:19:30.995 "sha512" 00:19:30.995 ], 00:19:30.995 "dhchap_dhgroups": [ 00:19:30.995 "null", 00:19:30.995 "ffdhe2048", 00:19:30.995 "ffdhe3072", 00:19:30.995 "ffdhe4096", 00:19:30.995 "ffdhe6144", 00:19:30.995 "ffdhe8192" 00:19:30.995 ] 00:19:30.995 } 00:19:30.995 }, 00:19:30.995 { 00:19:30.995 "method": "bdev_nvme_attach_controller", 00:19:30.995 "params": { 00:19:30.995 "name": "nvme0", 00:19:30.995 "trtype": "TCP", 00:19:30.995 "adrfam": "IPv4", 00:19:30.995 "traddr": "10.0.0.2", 00:19:30.995 "trsvcid": "4420", 00:19:30.995 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:30.995 "prchk_reftag": false, 00:19:30.995 "prchk_guard": false, 00:19:30.995 "ctrlr_loss_timeout_sec": 0, 00:19:30.995 "reconnect_delay_sec": 0, 00:19:30.995 "fast_io_fail_timeout_sec": 0, 00:19:30.995 "psk": "key0", 00:19:30.995 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:30.995 "hdgst": false, 00:19:30.995 "ddgst": false, 00:19:30.995 "multipath": "multipath" 00:19:30.995 } 00:19:30.995 }, 00:19:30.995 { 00:19:30.995 "method": "bdev_nvme_set_hotplug", 00:19:30.995 "params": { 00:19:30.995 "period_us": 100000, 00:19:30.995 "enable": false 00:19:30.995 } 00:19:30.995 }, 00:19:30.995 { 00:19:30.995 "method": "bdev_enable_histogram", 00:19:30.995 "params": { 00:19:30.995 "name": "nvme0n1", 00:19:30.995 "enable": true 00:19:30.995 } 00:19:30.995 }, 00:19:30.995 { 00:19:30.995 "method": "bdev_wait_for_examine" 00:19:30.995 } 00:19:30.995 ] 00:19:30.995 }, 00:19:30.995 { 00:19:30.995 "subsystem": "nbd", 00:19:30.995 "config": [] 00:19:30.995 } 00:19:30.995 ] 00:19:30.995 }' 00:19:30.995 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # killprocess 3249890 00:19:30.995 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3249890 ']' 00:19:30.995 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3249890 00:19:30.995 10:37:11 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:30.995 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:30.995 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249890 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249890' 00:19:31.254 killing process with pid 3249890 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3249890 00:19:31.254 Received shutdown signal, test time was about 1.000000 seconds 00:19:31.254 00:19:31.254 Latency(us) 00:19:31.254 [2024-11-20T09:37:11.985Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:31.254 [2024-11-20T09:37:11.985Z] =================================================================================================================== 00:19:31.254 [2024-11-20T09:37:11.985Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3249890 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # killprocess 3249868 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3249868 ']' 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3249868 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.254 
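Both bdevperf result blocks above report `iops` alongside `mibps`; the latter is simply iops × io_size scaled to MiB. A quick recomputation from the second run's JSON fields (values copied from the log):

```shell
# Sanity-check bdevperf's MiB/s figure: iops * io_size / 2^20.
iops=5449.921398205752   # "iops" from the second run's results JSON
io_size=4096             # "io_size" from the same block
mibps=$(awk -v i="$iops" -v s="$io_size" 'BEGIN { printf "%.2f", i * s / 1048576 }')
echo "$mibps"            # matches the reported 21.29 MiB/s
```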
10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3249868 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3249868' 00:19:31.254 killing process with pid 3249868 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3249868 00:19:31.254 10:37:11 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3249868 00:19:31.513 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # nvmfappstart -c /dev/fd/62 00:19:31.513 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:31.513 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:31.513 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # echo '{ 00:19:31.513 "subsystems": [ 00:19:31.513 { 00:19:31.513 "subsystem": "keyring", 00:19:31.513 "config": [ 00:19:31.513 { 00:19:31.513 "method": "keyring_file_add_key", 00:19:31.513 "params": { 00:19:31.513 "name": "key0", 00:19:31.513 "path": "/tmp/tmp.iRqze5yeVD" 00:19:31.513 } 00:19:31.513 } 00:19:31.513 ] 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "subsystem": "iobuf", 00:19:31.513 "config": [ 00:19:31.513 { 00:19:31.513 "method": "iobuf_set_options", 00:19:31.513 "params": { 00:19:31.513 "small_pool_count": 8192, 00:19:31.513 "large_pool_count": 1024, 00:19:31.513 "small_bufsize": 8192, 00:19:31.513 "large_bufsize": 135168, 00:19:31.513 "enable_numa": false 00:19:31.513 } 00:19:31.513 } 00:19:31.513 ] 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "subsystem": "sock", 00:19:31.513 "config": [ 
00:19:31.513 { 00:19:31.513 "method": "sock_set_default_impl", 00:19:31.513 "params": { 00:19:31.513 "impl_name": "posix" 00:19:31.513 } 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "method": "sock_impl_set_options", 00:19:31.513 "params": { 00:19:31.513 "impl_name": "ssl", 00:19:31.513 "recv_buf_size": 4096, 00:19:31.513 "send_buf_size": 4096, 00:19:31.513 "enable_recv_pipe": true, 00:19:31.513 "enable_quickack": false, 00:19:31.513 "enable_placement_id": 0, 00:19:31.513 "enable_zerocopy_send_server": true, 00:19:31.513 "enable_zerocopy_send_client": false, 00:19:31.513 "zerocopy_threshold": 0, 00:19:31.513 "tls_version": 0, 00:19:31.513 "enable_ktls": false 00:19:31.513 } 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "method": "sock_impl_set_options", 00:19:31.513 "params": { 00:19:31.513 "impl_name": "posix", 00:19:31.513 "recv_buf_size": 2097152, 00:19:31.513 "send_buf_size": 2097152, 00:19:31.513 "enable_recv_pipe": true, 00:19:31.513 "enable_quickack": false, 00:19:31.513 "enable_placement_id": 0, 00:19:31.513 "enable_zerocopy_send_server": true, 00:19:31.513 "enable_zerocopy_send_client": false, 00:19:31.513 "zerocopy_threshold": 0, 00:19:31.513 "tls_version": 0, 00:19:31.513 "enable_ktls": false 00:19:31.513 } 00:19:31.513 } 00:19:31.513 ] 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "subsystem": "vmd", 00:19:31.513 "config": [] 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "subsystem": "accel", 00:19:31.513 "config": [ 00:19:31.513 { 00:19:31.513 "method": "accel_set_options", 00:19:31.513 "params": { 00:19:31.513 "small_cache_size": 128, 00:19:31.513 "large_cache_size": 16, 00:19:31.513 "task_count": 2048, 00:19:31.513 "sequence_count": 2048, 00:19:31.513 "buf_count": 2048 00:19:31.513 } 00:19:31.513 } 00:19:31.513 ] 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "subsystem": "bdev", 00:19:31.513 "config": [ 00:19:31.513 { 00:19:31.513 "method": "bdev_set_options", 00:19:31.513 "params": { 00:19:31.513 "bdev_io_pool_size": 65535, 00:19:31.513 "bdev_io_cache_size": 
256, 00:19:31.513 "bdev_auto_examine": true, 00:19:31.513 "iobuf_small_cache_size": 128, 00:19:31.513 "iobuf_large_cache_size": 16 00:19:31.513 } 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "method": "bdev_raid_set_options", 00:19:31.513 "params": { 00:19:31.513 "process_window_size_kb": 1024, 00:19:31.513 "process_max_bandwidth_mb_sec": 0 00:19:31.513 } 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "method": "bdev_iscsi_set_options", 00:19:31.513 "params": { 00:19:31.513 "timeout_sec": 30 00:19:31.513 } 00:19:31.513 }, 00:19:31.513 { 00:19:31.513 "method": "bdev_nvme_set_options", 00:19:31.513 "params": { 00:19:31.513 "action_on_timeout": "none", 00:19:31.513 "timeout_us": 0, 00:19:31.513 "timeout_admin_us": 0, 00:19:31.513 "keep_alive_timeout_ms": 10000, 00:19:31.513 "arbitration_burst": 0, 00:19:31.513 "low_priority_weight": 0, 00:19:31.513 "medium_priority_weight": 0, 00:19:31.514 "high_priority_weight": 0, 00:19:31.514 "nvme_adminq_poll_period_us": 10000, 00:19:31.514 "nvme_ioq_poll_period_us": 0, 00:19:31.514 "io_queue_requests": 0, 00:19:31.514 "delay_cmd_submit": true, 00:19:31.514 "transport_retry_count": 4, 00:19:31.514 "bdev_retry_count": 3, 00:19:31.514 "transport_ack_timeout": 0, 00:19:31.514 "ctrlr_loss_timeout_sec": 0, 00:19:31.514 "reconnect_delay_sec": 0, 00:19:31.514 "fast_io_fail_timeout_sec": 0, 00:19:31.514 "disable_auto_failback": false, 00:19:31.514 "generate_uuids": false, 00:19:31.514 "transport_tos": 0, 00:19:31.514 "nvme_error_stat": false, 00:19:31.514 "rdma_srq_size": 0, 00:19:31.514 "io_path_stat": false, 00:19:31.514 "allow_accel_sequence": false, 00:19:31.514 "rdma_max_cq_size": 0, 00:19:31.514 "rdma_cm_event_timeout_ms": 0, 00:19:31.514 "dhchap_digests": [ 00:19:31.514 "sha256", 00:19:31.514 "sha384", 00:19:31.514 "sha512" 00:19:31.514 ], 00:19:31.514 "dhchap_dhgroups": [ 00:19:31.514 "null", 00:19:31.514 "ffdhe2048", 00:19:31.514 "ffdhe3072", 00:19:31.514 "ffdhe4096", 00:19:31.514 "ffdhe6144", 00:19:31.514 "ffdhe8192" 00:19:31.514 ] 
00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "bdev_nvme_set_hotplug", 00:19:31.514 "params": { 00:19:31.514 "period_us": 100000, 00:19:31.514 "enable": false 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "bdev_malloc_create", 00:19:31.514 "params": { 00:19:31.514 "name": "malloc0", 00:19:31.514 "num_blocks": 8192, 00:19:31.514 "block_size": 4096, 00:19:31.514 "physical_block_size": 4096, 00:19:31.514 "uuid": "a775ff4d-ced2-4b32-a49b-34603de06e2a", 00:19:31.514 "optimal_io_boundary": 0, 00:19:31.514 "md_size": 0, 00:19:31.514 "dif_type": 0, 00:19:31.514 "dif_is_head_of_md": false, 00:19:31.514 "dif_pi_format": 0 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "bdev_wait_for_examine" 00:19:31.514 } 00:19:31.514 ] 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "subsystem": "nbd", 00:19:31.514 "config": [] 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "subsystem": "scheduler", 00:19:31.514 "config": [ 00:19:31.514 { 00:19:31.514 "method": "framework_set_scheduler", 00:19:31.514 "params": { 00:19:31.514 "name": "static" 00:19:31.514 } 00:19:31.514 } 00:19:31.514 ] 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "subsystem": "nvmf", 00:19:31.514 "config": [ 00:19:31.514 { 00:19:31.514 "method": "nvmf_set_config", 00:19:31.514 "params": { 00:19:31.514 "discovery_filter": "match_any", 00:19:31.514 "admin_cmd_passthru": { 00:19:31.514 "identify_ctrlr": false 00:19:31.514 }, 00:19:31.514 "dhchap_digests": [ 00:19:31.514 "sha256", 00:19:31.514 "sha384", 00:19:31.514 "sha512" 00:19:31.514 ], 00:19:31.514 "dhchap_dhgroups": [ 00:19:31.514 "null", 00:19:31.514 "ffdhe2048", 00:19:31.514 "ffdhe3072", 00:19:31.514 "ffdhe4096", 00:19:31.514 "ffdhe6144", 00:19:31.514 "ffdhe8192" 00:19:31.514 ] 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "nvmf_set_max_subsystems", 00:19:31.514 "params": { 00:19:31.514 "max_subsystems": 1024 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": 
"nvmf_set_crdt", 00:19:31.514 "params": { 00:19:31.514 "crdt1": 0, 00:19:31.514 "crdt2": 0, 00:19:31.514 "crdt3": 0 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "nvmf_create_transport", 00:19:31.514 "params": { 00:19:31.514 "trtype": "TCP", 00:19:31.514 "max_queue_depth": 128, 00:19:31.514 "max_io_qpairs_per_ctrlr": 127, 00:19:31.514 "in_capsule_data_size": 4096, 00:19:31.514 "max_io_size": 131072, 00:19:31.514 "io_unit_size": 131072, 00:19:31.514 "max_aq_depth": 128, 00:19:31.514 "num_shared_buffers": 511, 00:19:31.514 "buf_cache_size": 4294967295, 00:19:31.514 "dif_insert_or_strip": false, 00:19:31.514 "zcopy": false, 00:19:31.514 "c2h_success": false, 00:19:31.514 "sock_priority": 0, 00:19:31.514 "abort_timeout_sec": 1, 00:19:31.514 "ack_timeout": 0, 00:19:31.514 "data_wr_pool_size": 0 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "nvmf_create_subsystem", 00:19:31.514 "params": { 00:19:31.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.514 "allow_any_host": false, 00:19:31.514 "serial_number": "00000000000000000000", 00:19:31.514 "model_number": "SPDK bdev Controller", 00:19:31.514 "max_namespaces": 32, 00:19:31.514 "min_cntlid": 1, 00:19:31.514 "max_cntlid": 65519, 00:19:31.514 "ana_reporting": false 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "nvmf_subsystem_add_host", 00:19:31.514 "params": { 00:19:31.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.514 "host": "nqn.2016-06.io.spdk:host1", 00:19:31.514 "psk": "key0" 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 00:19:31.514 "method": "nvmf_subsystem_add_ns", 00:19:31.514 "params": { 00:19:31.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.514 "namespace": { 00:19:31.514 "nsid": 1, 00:19:31.514 "bdev_name": "malloc0", 00:19:31.514 "nguid": "A775FF4DCED24B32A49B34603DE06E2A", 00:19:31.514 "uuid": "a775ff4d-ced2-4b32-a49b-34603de06e2a", 00:19:31.514 "no_auto_visible": false 00:19:31.514 } 00:19:31.514 } 00:19:31.514 }, 00:19:31.514 { 
00:19:31.514 "method": "nvmf_subsystem_add_listener", 00:19:31.514 "params": { 00:19:31.514 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.514 "listen_address": { 00:19:31.514 "trtype": "TCP", 00:19:31.514 "adrfam": "IPv4", 00:19:31.514 "traddr": "10.0.0.2", 00:19:31.514 "trsvcid": "4420" 00:19:31.514 }, 00:19:31.514 "secure_channel": false, 00:19:31.514 "sock_impl": "ssl" 00:19:31.514 } 00:19:31.514 } 00:19:31.514 ] 00:19:31.514 } 00:19:31.514 ] 00:19:31.514 }' 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@328 -- # nvmfpid=3250370 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # waitforlisten 3250370 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3250370 ']' 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.514 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:31.514 10:37:12 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.514 [2024-11-20 10:37:12.189573] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:19:31.514 [2024-11-20 10:37:12.189622] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.773 [2024-11-20 10:37:12.253247] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:31.773 [2024-11-20 10:37:12.294399] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:31.773 [2024-11-20 10:37:12.294432] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:31.773 [2024-11-20 10:37:12.294450] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:31.773 [2024-11-20 10:37:12.294456] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:31.773 [2024-11-20 10:37:12.294477] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
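The startup configuration echoed to /dev/fd/62 above wires TLS end to end: the PSK file is registered as "key0" under the keyring subsystem, the "ssl" sock implementation is configured, and the host is admitted via nvmf_subsystem_add_host with "psk": "key0" on a listener that pins "sock_impl": "ssl". A minimal sketch of assembling just that TLS-relevant subset programmatically follows; the key path and NQNs are copied from the log, while the helper function itself is hypothetical and not part of SPDK:

```python
import json

def tls_target_config(psk_path, subnqn, hostnqn, traddr, trsvcid="4420"):
    """Build the minimal TLS-relevant subset of an SPDK startup config.

    Mirrors the subsystems seen in the log: the PSK file is registered
    as "key0" and referenced by nvmf_subsystem_add_host, and the
    listener selects the "ssl" sock implementation so the TCP transport
    negotiates TLS for that host.
    """
    return {
        "subsystems": [
            {"subsystem": "keyring", "config": [
                {"method": "keyring_file_add_key",
                 "params": {"name": "key0", "path": psk_path}},
            ]},
            {"subsystem": "nvmf", "config": [
                {"method": "nvmf_subsystem_add_host",
                 "params": {"nqn": subnqn, "host": hostnqn, "psk": "key0"}},
                {"method": "nvmf_subsystem_add_listener",
                 "params": {"nqn": subnqn,
                            "listen_address": {"trtype": "TCP",
                                               "adrfam": "IPv4",
                                               "traddr": traddr,
                                               "trsvcid": trsvcid},
                            "secure_channel": False,
                            "sock_impl": "ssl"}},
            ]},
        ]
    }

# Values taken from the log output above.
cfg = tls_target_config("/tmp/tmp.iRqze5yeVD",
                        "nqn.2016-06.io.spdk:cnode1",
                        "nqn.2016-06.io.spdk:host1",
                        "10.0.0.2")
print(json.dumps(cfg, indent=2))
```

A document of this shape is what the test script pipes to nvmf_tgt via `-c /dev/fd/62`; the full config in the log additionally carries the bdev, sock, and transport tuning omitted here for brevity.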
00:19:31.773 [2024-11-20 10:37:12.295086] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:19:32.032 [2024-11-20 10:37:12.506303] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:32.032 [2024-11-20 10:37:12.538340] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:32.032 [2024-11-20 10:37:12.538553] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # bdevperf_pid=3250614 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # waitforlisten 3250614 /var/tmp/bdevperf.sock 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@835 -- # '[' -z 3250614 ']' 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@840 -- # local 
max_retries=100 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:32.599 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:32.599 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # echo '{ 00:19:32.599 "subsystems": [ 00:19:32.599 { 00:19:32.599 "subsystem": "keyring", 00:19:32.599 "config": [ 00:19:32.599 { 00:19:32.599 "method": "keyring_file_add_key", 00:19:32.599 "params": { 00:19:32.599 "name": "key0", 00:19:32.599 "path": "/tmp/tmp.iRqze5yeVD" 00:19:32.599 } 00:19:32.599 } 00:19:32.599 ] 00:19:32.599 }, 00:19:32.599 { 00:19:32.599 "subsystem": "iobuf", 00:19:32.599 "config": [ 00:19:32.599 { 00:19:32.599 "method": "iobuf_set_options", 00:19:32.599 "params": { 00:19:32.599 "small_pool_count": 8192, 00:19:32.599 "large_pool_count": 1024, 00:19:32.599 "small_bufsize": 8192, 00:19:32.599 "large_bufsize": 135168, 00:19:32.599 "enable_numa": false 00:19:32.599 } 00:19:32.599 } 00:19:32.599 ] 00:19:32.599 }, 00:19:32.599 { 00:19:32.599 "subsystem": "sock", 00:19:32.599 "config": [ 00:19:32.599 { 00:19:32.599 "method": "sock_set_default_impl", 00:19:32.599 "params": { 00:19:32.599 "impl_name": "posix" 00:19:32.599 } 00:19:32.599 }, 00:19:32.599 { 00:19:32.599 "method": "sock_impl_set_options", 00:19:32.599 "params": { 00:19:32.599 "impl_name": "ssl", 00:19:32.599 "recv_buf_size": 4096, 00:19:32.599 "send_buf_size": 4096, 00:19:32.599 "enable_recv_pipe": true, 00:19:32.599 "enable_quickack": false, 00:19:32.599 "enable_placement_id": 0, 00:19:32.599 "enable_zerocopy_send_server": true, 00:19:32.599 "enable_zerocopy_send_client": false, 00:19:32.599 "zerocopy_threshold": 0, 00:19:32.599 "tls_version": 0, 00:19:32.599 "enable_ktls": false 00:19:32.599 } 00:19:32.599 }, 00:19:32.599 { 00:19:32.599 "method": "sock_impl_set_options", 00:19:32.599 "params": { 
00:19:32.599 "impl_name": "posix", 00:19:32.599 "recv_buf_size": 2097152, 00:19:32.599 "send_buf_size": 2097152, 00:19:32.600 "enable_recv_pipe": true, 00:19:32.600 "enable_quickack": false, 00:19:32.600 "enable_placement_id": 0, 00:19:32.600 "enable_zerocopy_send_server": true, 00:19:32.600 "enable_zerocopy_send_client": false, 00:19:32.600 "zerocopy_threshold": 0, 00:19:32.600 "tls_version": 0, 00:19:32.600 "enable_ktls": false 00:19:32.600 } 00:19:32.600 } 00:19:32.600 ] 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "subsystem": "vmd", 00:19:32.600 "config": [] 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "subsystem": "accel", 00:19:32.600 "config": [ 00:19:32.600 { 00:19:32.600 "method": "accel_set_options", 00:19:32.600 "params": { 00:19:32.600 "small_cache_size": 128, 00:19:32.600 "large_cache_size": 16, 00:19:32.600 "task_count": 2048, 00:19:32.600 "sequence_count": 2048, 00:19:32.600 "buf_count": 2048 00:19:32.600 } 00:19:32.600 } 00:19:32.600 ] 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "subsystem": "bdev", 00:19:32.600 "config": [ 00:19:32.600 { 00:19:32.600 "method": "bdev_set_options", 00:19:32.600 "params": { 00:19:32.600 "bdev_io_pool_size": 65535, 00:19:32.600 "bdev_io_cache_size": 256, 00:19:32.600 "bdev_auto_examine": true, 00:19:32.600 "iobuf_small_cache_size": 128, 00:19:32.600 "iobuf_large_cache_size": 16 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_raid_set_options", 00:19:32.600 "params": { 00:19:32.600 "process_window_size_kb": 1024, 00:19:32.600 "process_max_bandwidth_mb_sec": 0 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_iscsi_set_options", 00:19:32.600 "params": { 00:19:32.600 "timeout_sec": 30 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_nvme_set_options", 00:19:32.600 "params": { 00:19:32.600 "action_on_timeout": "none", 00:19:32.600 "timeout_us": 0, 00:19:32.600 "timeout_admin_us": 0, 00:19:32.600 "keep_alive_timeout_ms": 10000, 00:19:32.600 
"arbitration_burst": 0, 00:19:32.600 "low_priority_weight": 0, 00:19:32.600 "medium_priority_weight": 0, 00:19:32.600 "high_priority_weight": 0, 00:19:32.600 "nvme_adminq_poll_period_us": 10000, 00:19:32.600 "nvme_ioq_poll_period_us": 0, 00:19:32.600 "io_queue_requests": 512, 00:19:32.600 "delay_cmd_submit": true, 00:19:32.600 "transport_retry_count": 4, 00:19:32.600 "bdev_retry_count": 3, 00:19:32.600 "transport_ack_timeout": 0, 00:19:32.600 "ctrlr_loss_timeout_sec": 0, 00:19:32.600 "reconnect_delay_sec": 0, 00:19:32.600 "fast_io_fail_timeout_sec": 0, 00:19:32.600 "disable_auto_failback": false, 00:19:32.600 "generate_uuids": false, 00:19:32.600 "transport_tos": 0, 00:19:32.600 "nvme_error_stat": false, 00:19:32.600 "rdma_srq_size": 0, 00:19:32.600 "io_path_stat": false, 00:19:32.600 "allow_accel_sequence": false, 00:19:32.600 "rdma_max_cq_size": 0, 00:19:32.600 "rdma_cm_event_timeout_ms": 0, 00:19:32.600 "dhchap_digests": [ 00:19:32.600 "sha256", 00:19:32.600 "sha384", 00:19:32.600 "sha512" 00:19:32.600 ], 00:19:32.600 "dhchap_dhgroups": [ 00:19:32.600 "null", 00:19:32.600 "ffdhe2048", 00:19:32.600 "ffdhe3072", 00:19:32.600 "ffdhe4096", 00:19:32.600 "ffdhe6144", 00:19:32.600 "ffdhe8192" 00:19:32.600 ] 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_nvme_attach_controller", 00:19:32.600 "params": { 00:19:32.600 "name": "nvme0", 00:19:32.600 "trtype": "TCP", 00:19:32.600 "adrfam": "IPv4", 00:19:32.600 "traddr": "10.0.0.2", 00:19:32.600 "trsvcid": "4420", 00:19:32.600 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:32.600 "prchk_reftag": false, 00:19:32.600 "prchk_guard": false, 00:19:32.600 "ctrlr_loss_timeout_sec": 0, 00:19:32.600 "reconnect_delay_sec": 0, 00:19:32.600 "fast_io_fail_timeout_sec": 0, 00:19:32.600 "psk": "key0", 00:19:32.600 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:32.600 "hdgst": false, 00:19:32.600 "ddgst": false, 00:19:32.600 "multipath": "multipath" 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 
"method": "bdev_nvme_set_hotplug", 00:19:32.600 "params": { 00:19:32.600 "period_us": 100000, 00:19:32.600 "enable": false 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_enable_histogram", 00:19:32.600 "params": { 00:19:32.600 "name": "nvme0n1", 00:19:32.600 "enable": true 00:19:32.600 } 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "method": "bdev_wait_for_examine" 00:19:32.600 } 00:19:32.600 ] 00:19:32.600 }, 00:19:32.600 { 00:19:32.600 "subsystem": "nbd", 00:19:32.600 "config": [] 00:19:32.600 } 00:19:32.600 ] 00:19:32.600 }' 00:19:32.600 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:32.600 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.600 [2024-11-20 10:37:13.123063] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:32.600 [2024-11-20 10:37:13.123112] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3250614 ] 00:19:32.600 [2024-11-20 10:37:13.197950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.600 [2024-11-20 10:37:13.238400] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.859 [2024-11-20 10:37:13.390909] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:33.425 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:33.425 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@868 -- # return 0 00:19:33.425 10:37:13 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:33.425 10:37:13 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # jq -r '.[].name' 00:19:33.685 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:33.685 10:37:14 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:33.685 Running I/O for 1 seconds... 00:19:34.618 5443.00 IOPS, 21.26 MiB/s 00:19:34.618 Latency(us) 00:19:34.618 [2024-11-20T09:37:15.350Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.619 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:34.619 Verification LBA range: start 0x0 length 0x2000 00:19:34.619 nvme0n1 : 1.01 5491.85 21.45 0.00 0.00 23134.11 4993.22 30208.98 00:19:34.619 [2024-11-20T09:37:15.350Z] =================================================================================================================== 00:19:34.619 [2024-11-20T09:37:15.350Z] Total : 5491.85 21.45 0.00 0.00 23134.11 4993.22 30208.98 00:19:34.619 { 00:19:34.619 "results": [ 00:19:34.619 { 00:19:34.619 "job": "nvme0n1", 00:19:34.619 "core_mask": "0x2", 00:19:34.619 "workload": "verify", 00:19:34.619 "status": "finished", 00:19:34.619 "verify_range": { 00:19:34.619 "start": 0, 00:19:34.619 "length": 8192 00:19:34.619 }, 00:19:34.619 "queue_depth": 128, 00:19:34.619 "io_size": 4096, 00:19:34.619 "runtime": 1.014413, 00:19:34.619 "iops": 5491.8460232666575, 00:19:34.619 "mibps": 21.45252352838538, 00:19:34.619 "io_failed": 0, 00:19:34.619 "io_timeout": 0, 00:19:34.619 "avg_latency_us": 23134.11108888718, 00:19:34.619 "min_latency_us": 4993.219047619048, 00:19:34.619 "max_latency_us": 30208.975238095238 00:19:34.619 } 00:19:34.619 ], 00:19:34.619 "core_count": 1 00:19:34.619 } 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # trap - SIGINT SIGTERM EXIT 00:19:34.619 10:37:15 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # cleanup 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # type=--id 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@813 -- # id=0 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:34.619 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:34.619 nvmf_trace.0 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@827 -- # return 0 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 3250614 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3250614 ']' 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3250614 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o 
comm= 3250614 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3250614' 00:19:34.876 killing process with pid 3250614 00:19:34.876 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3250614 00:19:34.876 Received shutdown signal, test time was about 1.000000 seconds 00:19:34.876 00:19:34.876 Latency(us) 00:19:34.876 [2024-11-20T09:37:15.607Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:34.876 [2024-11-20T09:37:15.608Z] =================================================================================================================== 00:19:34.877 [2024-11-20T09:37:15.608Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3250614 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@99 -- # sync 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@102 -- # set +e 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:34.877 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:35.134 rmmod nvme_tcp 00:19:35.134 rmmod nvme_fabrics 00:19:35.134 rmmod nvme_keyring 00:19:35.134 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:19:35.134 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@106 -- # set -e 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@107 -- # return 0 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@336 -- # '[' -n 3250370 ']' 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@337 -- # killprocess 3250370 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # '[' -z 3250370 ']' 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # kill -0 3250370 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # uname 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3250370 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3250370' 00:19:35.135 killing process with pid 3250370 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@973 -- # kill 3250370 00:19:35.135 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@978 -- # wait 3250370 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # nvmf_fini 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@264 -- # local dev 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:35.392 10:37:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@130 -- # return 0 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_1 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # _dev=0 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@41 -- # dev_map=() 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/setup.sh@284 -- # iptr 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-save 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@542 -- # iptables-restore 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.h0hCpaiwiV /tmp/tmp.4RZQ7RZ1vO /tmp/tmp.iRqze5yeVD 00:19:37.296 00:19:37.296 real 1m19.949s 00:19:37.296 user 2m1.190s 00:19:37.296 sys 0m31.318s 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.296 ************************************ 00:19:37.296 END TEST nvmf_tls 00:19:37.296 ************************************ 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 
3 -le 1 ']' 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:37.296 10:37:17 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:37.556 ************************************ 00:19:37.556 START TEST nvmf_fips 00:19:37.556 ************************************ 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:19:37.556 * Looking for test storage... 00:19:37.556 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lcov --version 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=<' 00:19:37.556 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=2 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.556 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # return 0 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:37.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.556 --rc genhtml_branch_coverage=1 00:19:37.556 --rc genhtml_function_coverage=1 00:19:37.556 --rc genhtml_legend=1 00:19:37.556 --rc geninfo_all_blocks=1 00:19:37.556 --rc geninfo_unexecuted_blocks=1 00:19:37.556 00:19:37.556 ' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:37.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.556 --rc genhtml_branch_coverage=1 00:19:37.556 --rc genhtml_function_coverage=1 00:19:37.556 --rc genhtml_legend=1 00:19:37.556 --rc geninfo_all_blocks=1 00:19:37.556 --rc geninfo_unexecuted_blocks=1 00:19:37.556 00:19:37.556 ' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:37.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.556 --rc genhtml_branch_coverage=1 00:19:37.556 --rc genhtml_function_coverage=1 00:19:37.556 --rc genhtml_legend=1 00:19:37.556 --rc geninfo_all_blocks=1 00:19:37.556 --rc geninfo_unexecuted_blocks=1 00:19:37.556 00:19:37.556 ' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:37.556 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:37.556 --rc genhtml_branch_coverage=1 00:19:37.556 --rc genhtml_function_coverage=1 00:19:37.556 --rc genhtml_legend=1 00:19:37.556 --rc geninfo_all_blocks=1 00:19:37.556 --rc geninfo_unexecuted_blocks=1 00:19:37.556 00:19:37.556 ' 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:37.556 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@15 -- # shopt -s extglob 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@50 -- # : 0 00:19:37.557 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:37.557 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@90 -- # check_openssl_version 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@84 -- # local target=3.0.0 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # openssl version 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # awk '{print $2}' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@86 -- # ge 3.1.1 3.0.0 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@376 -- # cmp_versions 3.1.1 '>=' 3.0.0 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:37.557 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # IFS=.-: 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@336 -- # read -ra ver1 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # IFS=.-: 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # read -ra ver2 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # local 'op=>=' 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # ver1_l=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # ver2_l=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@344 -- # case "$op" in 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@348 -- # : 1 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 3 =~ ^[0-9]+$ ]] 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=3 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v++ )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # decimal 1 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=1 00:19:37.557 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 1 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # ver1[v]=1 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # decimal 0 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@353 -- # local d=0 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@354 -- # [[ 0 =~ ^[0-9]+$ ]] 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@355 -- # echo 0 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@366 -- # ver2[v]=0 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@367 -- # return 0 00:19:37.558 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # openssl info -modulesdir 00:19:37.817 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@96 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # openssl fipsinstall -help 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@102 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # export callback=build_openssl_config 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@105 -- # callback=build_openssl_config 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # build_openssl_config 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@38 -- # cat 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # [[ ! 
-t 0 ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@59 -- # cat - 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # export OPENSSL_CONF=spdk_fips.conf 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@115 -- # OPENSSL_CONF=spdk_fips.conf 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # mapfile -t providers 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # openssl list -providers 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@117 -- # grep name 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # (( 2 != 2 )) 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: openssl base provider != *base* ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@121 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # NOT openssl md5 /dev/fd/62 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@652 -- # local es=0 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@128 -- # : 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@654 -- # valid_exec_arg openssl md5 /dev/fd/62 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # local arg=openssl 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # type -t openssl 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@646 -- # type -P openssl 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # arg=/usr/bin/openssl 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@646 -- # [[ -x /usr/bin/openssl ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # openssl md5 /dev/fd/62 00:19:37.818 Error setting digest 00:19:37.818 40F20205257F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:341:Global default library context, Algorithm (MD5 : 95), Properties () 00:19:37.818 40F20205257F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:272: 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@655 -- # es=1 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmftestinit 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # remove_target_ns 00:19:37.818 10:37:18 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # xtrace_disable 00:19:37.818 10:37:18 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # pci_devs=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@131 -- # local -a pci_devs 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # pci_net_devs=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # pci_drivers=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@133 -- # local -A pci_drivers 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # net_devs=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@135 -- # local -ga net_devs 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # e810=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@136 -- # local -ga e810 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/common.sh@137 -- # x722=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@137 -- # local -ga x722 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # mlx=() 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@138 -- # local -ga mlx 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 
00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:19:44.463 Found 0000:86:00.0 (0x8086 - 0x159b) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:19:44.463 Found 0000:86:00.1 (0x8086 - 0x159b) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:19:44.463 Found net devices under 0000:86:00.0: cvl_0_0 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 
00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # [[ up == up ]] 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:19:44.463 Found net devices under 0000:86:00.1: cvl_0_1 00:19:44.463 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@262 -- # is_hw=yes 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@257 -- # create_target_ns 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@27 -- # local -gA dev_map 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@28 -- # local -g _dev 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # ips=() 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:19:44.464 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772161 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:19:44.464 10.0.0.1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@11 -- # local val=167772162 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- 
# eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:19:44.464 10.0.0.2 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips 
-- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@38 -- # ping_ips 1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:19:44.464 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:19:44.465 PING 10.0.0.1 (10.0.0.1) 56(84) bytes 
of data. 00:19:44.465 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.441 ms 00:19:44.465 00:19:44.465 --- 10.0.0.1 ping statistics --- 00:19:44.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.465 rtt min/avg/max/mdev = 0.441/0.441/0.441/0.000 ms 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:19:44.465 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:19:44.465 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:19:44.465 00:19:44.465 --- 10.0.0.2 ping statistics --- 00:19:44.465 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:19:44.465 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair++ )) 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # return 0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:19:44.465 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=initiator1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # 
get_net_dev target0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:19:44.465 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # get_net_dev target1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@107 -- # local dev=target1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@109 -- # return 1 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@168 -- # dev= 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@169 -- # return 0 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@132 -- # nvmfappstart -m 0x2 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@726 -- # xtrace_disable 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.465 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@328 -- # nvmfpid=3254662 00:19:44.466 10:37:24 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # waitforlisten 3254662 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3254662 ']' 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:44.466 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.466 10:37:24 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.466 [2024-11-20 10:37:24.520098] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:44.466 [2024-11-20 10:37:24.520149] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:44.466 [2024-11-20 10:37:24.597958] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:44.466 [2024-11-20 10:37:24.639488] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:44.466 [2024-11-20 10:37:24.639524] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
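[editor's note] The interface setup traced above repeatedly invokes a `val_to_ip` helper (nvmf/setup.sh@11-13) to turn integers from the `ip_pool` (starting at 0x0a000001 = 167772161) into dotted-quad addresses via `printf '%u.%u.%u.%u\n'`. A minimal standalone reconstruction of that helper, assuming the standard shift-and-mask octet split (the trace only shows the printf call with the octets already computed):

```shell
# Reconstruction (assumed internals) of the val_to_ip helper seen in the
# nvmf/setup.sh trace: split a 32-bit integer into four octets and print
# them as a dotted-quad IPv4 address.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

# The two pool values consumed in the trace above:
val_to_ip 167772161   # 10.0.0.1 (initiator, cvl_0_0)
val_to_ip 167772162   # 10.0.0.2 (target, cvl_0_1 inside nvmf_ns_spdk)
```

This matches why setup_interface_pair advances `ip_pool += 2` per pair: each initiator/target pair consumes two consecutive addresses.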
00:19:44.466 [2024-11-20 10:37:24.639531] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:44.466 [2024-11-20 10:37:24.639538] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:44.466 [2024-11-20 10:37:24.639543] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:44.466 [2024-11-20 10:37:24.640097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@134 -- # trap cleanup EXIT 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # mktemp -t spdk-psk.XXX 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # key_path=/tmp/spdk-psk.nRH 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@140 -- # chmod 0600 /tmp/spdk-psk.nRH 00:19:44.724 10:37:25 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@142 -- # setup_nvmf_tgt_conf /tmp/spdk-psk.nRH 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/tmp/spdk-psk.nRH 00:19:44.724 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:19:44.983 [2024-11-20 10:37:25.544587] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:44.983 [2024-11-20 10:37:25.560593] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:44.983 [2024-11-20 10:37:25.560792] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:44.983 malloc0 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # bdevperf_pid=3254912 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@149 -- # waitforlisten 3254912 /var/tmp/bdevperf.sock 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@146 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@835 -- # '[' -z 3254912 ']' 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:44.983 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
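[editor's note] The key provisioning traced in fips.sh@137-140 can be condensed into a short sketch: write the NVMe/TCP TLS PSK (interchange format `NVMeTLSkey-1:01:...`) to a mode-0600 temp file, which is then handed to the bdevperf RPC socket. The `rpc.py` calls are shown as comments because they require the running SPDK instances from the trace; paths are the ones appearing above.

```shell
# Sketch of the PSK provisioning performed by fips.sh above: persist the
# TLS pre-shared key with owner-only permissions before registering it.
key='NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:'
key_path=$(mktemp -t spdk-psk.XXX)   # e.g. /tmp/spdk-psk.nRH in the trace
echo -n "$key" > "$key_path"
chmod 0600 "$key_path"

# With the bdevperf instance from the trace running (not done here):
#   rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 "$key_path"
#   rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST \
#     -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 \
#     -q nqn.2016-06.io.spdk:host1 --psk key0
```

The 0600 mode matters: the keyring_file backend expects the key file to be readable only by its owner.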
00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:44.983 10:37:25 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:44.983 [2024-11-20 10:37:25.688383] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:19:44.983 [2024-11-20 10:37:25.688428] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3254912 ] 00:19:45.242 [2024-11-20 10:37:25.765001] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:45.242 [2024-11-20 10:37:25.804565] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:19:45.808 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:45.808 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@868 -- # return 0 00:19:45.808 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/spdk-psk.nRH 00:19:46.067 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@152 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk key0 00:19:46.326 [2024-11-20 10:37:26.840648] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:46.326 TLSTESTn1 00:19:46.326 10:37:26 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@156 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:46.326 Running I/O 
for 10 seconds... 00:19:48.635 5366.00 IOPS, 20.96 MiB/s [2024-11-20T09:37:30.301Z] 5444.00 IOPS, 21.27 MiB/s [2024-11-20T09:37:31.235Z] 5521.67 IOPS, 21.57 MiB/s [2024-11-20T09:37:32.170Z] 5433.25 IOPS, 21.22 MiB/s [2024-11-20T09:37:33.105Z] 5358.00 IOPS, 20.93 MiB/s [2024-11-20T09:37:34.041Z] 5280.83 IOPS, 20.63 MiB/s [2024-11-20T09:37:35.416Z] 5237.29 IOPS, 20.46 MiB/s [2024-11-20T09:37:36.352Z] 5197.38 IOPS, 20.30 MiB/s [2024-11-20T09:37:37.286Z] 5127.11 IOPS, 20.03 MiB/s [2024-11-20T09:37:37.286Z] 5063.20 IOPS, 19.78 MiB/s 00:19:56.555 Latency(us) 00:19:56.555 [2024-11-20T09:37:37.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.555 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:56.555 Verification LBA range: start 0x0 length 0x2000 00:19:56.555 TLSTESTn1 : 10.03 5062.32 19.77 0.00 0.00 25236.86 6834.47 30708.30 00:19:56.555 [2024-11-20T09:37:37.286Z] =================================================================================================================== 00:19:56.555 [2024-11-20T09:37:37.286Z] Total : 5062.32 19.77 0.00 0.00 25236.86 6834.47 30708.30 00:19:56.555 { 00:19:56.555 "results": [ 00:19:56.555 { 00:19:56.555 "job": "TLSTESTn1", 00:19:56.555 "core_mask": "0x4", 00:19:56.555 "workload": "verify", 00:19:56.555 "status": "finished", 00:19:56.555 "verify_range": { 00:19:56.555 "start": 0, 00:19:56.555 "length": 8192 00:19:56.555 }, 00:19:56.555 "queue_depth": 128, 00:19:56.555 "io_size": 4096, 00:19:56.555 "runtime": 10.026828, 00:19:56.555 "iops": 5062.318811093598, 00:19:56.555 "mibps": 19.774682855834367, 00:19:56.555 "io_failed": 0, 00:19:56.555 "io_timeout": 0, 00:19:56.555 "avg_latency_us": 25236.86447828628, 00:19:56.555 "min_latency_us": 6834.4685714285715, 00:19:56.555 "max_latency_us": 30708.297142857144 00:19:56.555 } 00:19:56.555 ], 00:19:56.555 "core_count": 1 00:19:56.555 } 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # 
cleanup 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # type=--id 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@813 -- # id=0 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@824 -- # for n in $shm_files 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:19:56.555 nvmf_trace.0 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@827 -- # return 0 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 3254912 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3254912 ']' 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3254912 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3254912 00:19:56.555 10:37:37 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3254912' 00:19:56.555 killing process with pid 3254912 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3254912 00:19:56.555 Received shutdown signal, test time was about 10.000000 seconds 00:19:56.555 00:19:56.555 Latency(us) 00:19:56.555 [2024-11-20T09:37:37.286Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.555 [2024-11-20T09:37:37.286Z] =================================================================================================================== 00:19:56.555 [2024-11-20T09:37:37.286Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.555 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3254912 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # nvmfcleanup 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@99 -- # sync 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@102 -- # set +e 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@103 -- # for i in {1..20} 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:19:56.814 rmmod nvme_tcp 00:19:56.814 rmmod nvme_fabrics 00:19:56.814 rmmod nvme_keyring 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@106 -- # set -e 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@107 -- # return 0 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@336 -- # '[' -n 3254662 ']' 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@337 -- # killprocess 3254662 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # '[' -z 3254662 ']' 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # kill -0 3254662 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # uname 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3254662 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3254662' 00:19:56.814 killing process with pid 3254662 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@973 -- # kill 3254662 00:19:56.814 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@978 -- # wait 3254662 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # nvmf_fini 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@264 -- # local dev 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@267 
-- # remove_target_ns 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:57.073 10:37:37 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@268 -- # delete_main_bridge 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@130 -- # return 0 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # _dev=0 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@41 -- # dev_map=() 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/setup.sh@284 -- # iptr 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-save 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@542 -- # iptables-restore 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /tmp/spdk-psk.nRH 00:19:59.608 00:19:59.608 real 0m21.711s 00:19:59.608 user 0m22.511s 00:19:59.608 sys 0m10.512s 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:19:59.608 ************************************ 00:19:59.608 END TEST nvmf_fips 00:19:59.608 ************************************ 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@43 -- # run_test nvmf_control_msg_list /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:19:59.608 ************************************ 00:19:59.608 START TEST nvmf_control_msg_list 00:19:59.608 ************************************ 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/control_msg_list.sh --transport=tcp 00:19:59.608 * Looking for test storage... 00:19:59.608 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lcov --version 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # IFS=.-: 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@336 -- # read -ra ver1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # IFS=.-: 00:19:59.608 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@337 -- # read -ra ver2 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@338 -- # local 'op=<' 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@340 -- # ver1_l=2 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@341 -- # ver2_l=1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@344 -- # case "$op" in 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@345 -- # : 1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # decimal 1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 1 00:19:59.608 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@365 -- # ver1[v]=1 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # decimal 2 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@353 -- # local d=2 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@355 -- # echo 2 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@366 -- # ver2[v]=2 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@368 -- # return 0 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:19:59.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.609 --rc genhtml_branch_coverage=1 00:19:59.609 --rc genhtml_function_coverage=1 00:19:59.609 --rc 
genhtml_legend=1 00:19:59.609 --rc geninfo_all_blocks=1 00:19:59.609 --rc geninfo_unexecuted_blocks=1 00:19:59.609 00:19:59.609 ' 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:19:59.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.609 --rc genhtml_branch_coverage=1 00:19:59.609 --rc genhtml_function_coverage=1 00:19:59.609 --rc genhtml_legend=1 00:19:59.609 --rc geninfo_all_blocks=1 00:19:59.609 --rc geninfo_unexecuted_blocks=1 00:19:59.609 00:19:59.609 ' 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:19:59.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.609 --rc genhtml_branch_coverage=1 00:19:59.609 --rc genhtml_function_coverage=1 00:19:59.609 --rc genhtml_legend=1 00:19:59.609 --rc geninfo_all_blocks=1 00:19:59.609 --rc geninfo_unexecuted_blocks=1 00:19:59.609 00:19:59.609 ' 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:19:59.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:59.609 --rc genhtml_branch_coverage=1 00:19:59.609 --rc genhtml_function_coverage=1 00:19:59.609 --rc genhtml_legend=1 00:19:59.609 --rc geninfo_all_blocks=1 00:19:59.609 --rc geninfo_unexecuted_blocks=1 00:19:59.609 00:19:59.609 ' 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # uname -s 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:19:59.609 10:37:39 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:19:59.609 10:37:39 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@15 -- # shopt -s extglob 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:19:59.609 
10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@5 -- # export PATH 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:19:59.609 10:37:40 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@50 -- # : 0 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:19:59.609 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@54 -- # have_pci_nics=0 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@12 -- # nvmftestinit 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@296 -- # prepare_net_devs 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@258 -- # local -g is_hw=no 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@260 -- # remove_target_ns 
00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@125 -- # xtrace_disable 00:19:59.609 10:37:40 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # pci_devs=() 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:06.179 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # net_devs=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:06.180 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # e810=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@136 -- # local -ga e810 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # x722=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@137 -- # local -ga x722 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # mlx=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@138 -- # local -ga mlx 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@157 
-- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:06.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:06.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 
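The discovery pass traced above buckets NICs by PCI vendor:device ID into the e810, x722 and mlx families (the `e810+=`, `x722+=` and `mlx+=` array appends from nvmf/common.sh) before selecting test ports. A minimal Python sketch of that classification — the device IDs are taken from the trace, the Mellanox vendor ID 0x15b3 and the function name are assumptions for illustration:

```python
# Map (vendor, device) PCI ID pairs to NIC families, mirroring the
# e810/x722/mlx arrays built by nvmf/common.sh in the trace above.
# Intel vendor ID 0x8086 appears in the trace; 0x15b3 (Mellanox) is assumed.
NIC_FAMILIES = {
    ("0x8086", "0x1592"): "e810",
    ("0x8086", "0x159b"): "e810",
    ("0x8086", "0x37d2"): "x722",
    ("0x15b3", "0xa2dc"): "mlx",
    ("0x15b3", "0x1017"): "mlx",
    ("0x15b3", "0x1019"): "mlx",
}

def classify_nic(vendor: str, device: str) -> str:
    """Return the driver family for a PCI vendor/device pair, or 'unknown'."""
    return NIC_FAMILIES.get((vendor.lower(), device.lower()), "unknown")
```

On this run both discovered ports (0000:86:00.0 and 0000:86:00.1, device 0x159b) fall into the e810 bucket, which is why `pci_devs` is populated from `e810[@]`.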
00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:06.180 Found net devices under 0000:86:00.0: cvl_0_0 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:06.180 Found net devices under 0000:86:00.1: cvl_0_1 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@262 -- # is_hw=yes 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@257 -- # create_target_ns 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip 
link set lo up 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@28 -- # local -g _dev 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # ips=() 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:06.180 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772161 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:06.181 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:06.181 10.0.0.1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@11 -- # local val=167772162 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 
10.0.0.2/24 dev cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:06.181 10.0.0.2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:06.181 
10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:06.181 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:06.181 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.456 ms 00:20:06.181 00:20:06.181 --- 10.0.0.1 ping statistics --- 00:20:06.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.181 rtt min/avg/max/mdev = 0.456/0.456/0.456/0.000 ms 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:06.181 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:06.181 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:06.182 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:06.182 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:20:06.182 00:20:06.182 --- 10.0.0.2 ping statistics --- 00:20:06.182 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:06.182 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@270 -- # return 0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:06.182 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target0 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:06.182 10:37:45 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:06.182 10:37:45 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # 
get_net_dev target1 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@107 -- # local dev=target1 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@109 -- # return 1 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@168 -- # dev= 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@169 -- # return 0 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@13 -- # nvmfappstart 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # 
set +x 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@328 -- # nvmfpid=3260304 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@329 -- # waitforlisten 3260304 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@835 -- # '[' -z 3260304 ']' 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:06.182 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:06.182 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.182 [2024-11-20 10:37:46.114258] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:20:06.182 [2024-11-20 10:37:46.114306] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:06.182 [2024-11-20 10:37:46.190948] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:06.182 [2024-11-20 10:37:46.231638] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:06.182 [2024-11-20 10:37:46.231675] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:06.183 [2024-11-20 10:37:46.231683] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:06.183 [2024-11-20 10:37:46.231691] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:06.183 [2024-11-20 10:37:46.231697] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:06.183 [2024-11-20 10:37:46.232267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@868 -- # return 0 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@19 -- # rpc_cmd nvmf_create_transport '-t tcp -o' --in-capsule-data-size 768 --control-msg-num 1 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 [2024-11-20 10:37:46.366991] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@20 -- # 
rpc_cmd nvmf_create_subsystem nqn.2024-07.io.spdk:cnode0 -a 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@21 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 Malloc0 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:06.183 [2024-11-20 10:37:46.407274] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@27 -- # perf_pid1=3260326 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x2 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@29 -- # perf_pid2=3260327 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x4 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@31 -- # perf_pid3=3260328 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@33 -- # wait 3260326 00:20:06.183 10:37:46 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x8 -q 1 -o 4096 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:06.183 [2024-11-20 10:37:46.495959] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:20:06.183 [2024-11-20 10:37:46.496165] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:06.183 [2024-11-20 10:37:46.496339] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:07.118 Initializing NVMe Controllers 00:20:07.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:07.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 3 00:20:07.118 Initialization complete. Launching workers. 00:20:07.118 ======================================================== 00:20:07.118 Latency(us) 00:20:07.118 Device Information : IOPS MiB/s Average min max 00:20:07.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 3: 25.00 0.10 41010.36 40748.04 41900.12 00:20:07.118 ======================================================== 00:20:07.118 Total : 25.00 0.10 41010.36 40748.04 41900.12 00:20:07.118 00:20:07.118 Initializing NVMe Controllers 00:20:07.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:07.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 2 00:20:07.118 Initialization complete. Launching workers. 
00:20:07.118 ======================================================== 00:20:07.118 Latency(us) 00:20:07.118 Device Information : IOPS MiB/s Average min max 00:20:07.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 2: 5991.00 23.40 166.58 127.78 497.72 00:20:07.118 ======================================================== 00:20:07.118 Total : 5991.00 23.40 166.58 127.78 497.72 00:20:07.118 00:20:07.118 Initializing NVMe Controllers 00:20:07.118 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:07.118 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 1 00:20:07.118 Initialization complete. Launching workers. 00:20:07.118 ======================================================== 00:20:07.118 Latency(us) 00:20:07.118 Device Information : IOPS MiB/s Average min max 00:20:07.118 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 1: 909.00 3.55 1099.12 130.13 41919.09 00:20:07.118 ======================================================== 00:20:07.119 Total : 909.00 3.55 1099.12 130.13 41919.09 00:20:07.119 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@34 -- # wait 3260327 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@35 -- # wait 3260328 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- target/control_msg_list.sh@38 -- # nvmftestfini 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@99 -- # sync 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:07.119 10:37:47 
nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@102 -- # set +e 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:07.119 rmmod nvme_tcp 00:20:07.119 rmmod nvme_fabrics 00:20:07.119 rmmod nvme_keyring 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@106 -- # set -e 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@107 -- # return 0 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@336 -- # '[' -n 3260304 ']' 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@337 -- # killprocess 3260304 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@954 -- # '[' -z 3260304 ']' 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@958 -- # kill -0 3260304 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # uname 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:07.119 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3260304 00:20:07.377 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:07.378 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:07.378 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@972 -- 
# echo 'killing process with pid 3260304' 00:20:07.378 killing process with pid 3260304 00:20:07.378 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@973 -- # kill 3260304 00:20:07.378 10:37:47 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@978 -- # wait 3260304 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@342 -- # nvmf_fini 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@264 -- # local dev 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:07.378 10:37:48 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@130 -- # return 0 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- 
nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # _dev=0 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@41 -- # dev_map=() 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/setup.sh@284 -- # iptr 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list 
-- nvmf/common.sh@542 -- # iptables-save 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- nvmf/common.sh@542 -- # iptables-restore 00:20:09.911 00:20:09.911 real 0m10.288s 00:20:09.911 user 0m6.764s 00:20:09.911 sys 0m5.512s 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_control_msg_list -- common/autotest_common.sh@10 -- # set +x 00:20:09.911 ************************************ 00:20:09.911 END TEST nvmf_control_msg_list 00:20:09.911 ************************************ 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@44 -- # run_test nvmf_wait_for_buf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:09.911 ************************************ 00:20:09.911 START TEST nvmf_wait_for_buf 00:20:09.911 ************************************ 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/wait_for_buf.sh --transport=tcp 00:20:09.911 * Looking for test storage... 
00:20:09.911 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lcov --version 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # IFS=.-: 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@336 -- # read -ra ver1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # IFS=.-: 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@337 -- # read -ra ver2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@338 -- # local 'op=<' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@340 -- # ver1_l=2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@341 -- # ver2_l=1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@344 -- # case "$op" in 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
scripts/common.sh@345 -- # : 1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # decimal 1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@365 -- # ver1[v]=1 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # decimal 2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@353 -- # local d=2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@355 -- # echo 2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@366 -- # ver2[v]=2 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@368 -- # return 0 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:09.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.911 --rc genhtml_branch_coverage=1 00:20:09.911 --rc genhtml_function_coverage=1 00:20:09.911 --rc genhtml_legend=1 00:20:09.911 --rc geninfo_all_blocks=1 00:20:09.911 --rc geninfo_unexecuted_blocks=1 00:20:09.911 00:20:09.911 ' 00:20:09.911 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:09.911 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.911 --rc genhtml_branch_coverage=1 00:20:09.911 --rc genhtml_function_coverage=1 00:20:09.911 --rc genhtml_legend=1 00:20:09.911 --rc geninfo_all_blocks=1 00:20:09.911 --rc geninfo_unexecuted_blocks=1 00:20:09.911 00:20:09.911 ' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.912 --rc genhtml_branch_coverage=1 00:20:09.912 --rc genhtml_function_coverage=1 00:20:09.912 --rc genhtml_legend=1 00:20:09.912 --rc geninfo_all_blocks=1 00:20:09.912 --rc geninfo_unexecuted_blocks=1 00:20:09.912 00:20:09.912 ' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:09.912 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:09.912 --rc genhtml_branch_coverage=1 00:20:09.912 --rc genhtml_function_coverage=1 00:20:09.912 --rc genhtml_legend=1 00:20:09.912 --rc geninfo_all_blocks=1 00:20:09.912 --rc geninfo_unexecuted_blocks=1 00:20:09.912 00:20:09.912 ' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # uname -s 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@15 -- # shopt -s extglob 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@5 -- # export PATH 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf 
-- nvmf/common.sh@50 -- # : 0 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:09.912 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@12 -- # nvmftestinit 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@260 -- # remove_target_ns 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@125 -- # xtrace_disable 00:20:09.912 10:37:50 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # pci_devs=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # net_devs=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # e810=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@136 -- # local -ga e810 00:20:16.478 
10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # x722=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@137 -- # local -ga x722 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # mlx=() 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@138 -- # local -ga mlx 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:16.478 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:16.478 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:16.479 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:16.479 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:16.479 Found net devices under 0000:86:00.0: cvl_0_0 00:20:16.479 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:16.479 Found net devices under 0000:86:00.1: cvl_0_1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@262 -- # is_hw=yes 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:16.479 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@257 -- # create_target_ns 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@28 -- # local -g _dev 
00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # ips=() 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772161 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:16.479 10.0.0.1 
00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@11 -- # local val=167772162 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:16.479 10.0.0.2 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 
00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:16.479 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:16.480 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:16.480 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:16.480 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.361 ms 00:20:16.480 00:20:16.480 --- 10.0.0.1 ping statistics --- 00:20:16.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.480 rtt min/avg/max/mdev = 0.361/0.361/0.361/0.000 ms 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:16.480 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:16.480 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:20:16.480 00:20:16.480 --- 10.0.0.2 ping statistics --- 00:20:16.480 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:16.480 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@270 -- # return 0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:16.480 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:16.480 10:37:56 
nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:20:16.480 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 
00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@107 -- # local dev=target1 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@109 -- # return 1 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@168 -- # dev= 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@169 -- # return 0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 
00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@13 -- # nvmfappstart --wait-for-rpc 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@328 -- # nvmfpid=3264111 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@329 -- # waitforlisten 3264111 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@835 -- # '[' -z 3264111 ']' 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:16.481 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 [2024-11-20 10:37:56.511001] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:16.481 [2024-11-20 10:37:56.511047] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:16.481 [2024-11-20 10:37:56.589444] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.481 [2024-11-20 10:37:56.630270] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:16.481 [2024-11-20 10:37:56.630308] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:16.481 [2024-11-20 10:37:56.630316] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:16.481 [2024-11-20 10:37:56.630322] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:16.481 [2024-11-20 10:37:56.630327] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:16.481 [2024-11-20 10:37:56.630878] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@868 -- # return 0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@15 -- # subnqn=nqn.2024-07.io.spdk:cnode0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@16 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@19 -- # rpc_cmd accel_set_options --small-cache-size 0 --large-cache-size 0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@20 -- # rpc_cmd iobuf_set_options --small-pool-count 154 --small_bufsize=8192 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 
00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@21 -- # rpc_cmd framework_start_init 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@22 -- # rpc_cmd bdev_malloc_create -b Malloc0 32 512 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 Malloc0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@23 -- # rpc_cmd nvmf_create_transport '-t tcp -o' -u 8192 -n 24 -b 24 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 [2024-11-20 10:37:56.820481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@24 -- # rpc_cmd nvmf_create_subsystem 
nqn.2024-07.io.spdk:cnode0 -a -s SPDK00000000000001 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2024-07.io.spdk:cnode0 Malloc0 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2024-07.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:16.481 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:16.481 [2024-11-20 10:37:56.848663] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:16.482 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:16.482 10:37:56 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 4 -o 131072 -w randread -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:20:16.482 [2024-11-20 10:37:56.938275] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery 
subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:20:17.858 Initializing NVMe Controllers 00:20:17.858 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2024-07.io.spdk:cnode0 00:20:17.858 Associating TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 with lcore 0 00:20:17.858 Initialization complete. Launching workers. 00:20:17.858 ======================================================== 00:20:17.858 Latency(us) 00:20:17.858 Device Information : IOPS MiB/s Average min max 00:20:17.858 TCP (addr:10.0.0.2 subnqn:nqn.2024-07.io.spdk:cnode0) NSID 1 from core 0: 124.00 15.50 33538.61 29925.51 71057.76 00:20:17.858 ======================================================== 00:20:17.858 Total : 124.00 15.50 33538.61 29925.51 71057.76 00:20:17.858 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # rpc_cmd iobuf_get_stats 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # jq -r '.[] | select(.module == "nvmf_TCP") | .small_pool.retry' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@32 -- # retry_count=1958 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@33 -- # [[ 1958 -eq 0 ]] 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- target/wait_for_buf.sh@38 -- # 
nvmftestfini 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@99 -- # sync 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@102 -- # set +e 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:17.858 rmmod nvme_tcp 00:20:17.858 rmmod nvme_fabrics 00:20:17.858 rmmod nvme_keyring 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@106 -- # set -e 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@107 -- # return 0 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@336 -- # '[' -n 3264111 ']' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@337 -- # killprocess 3264111 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@954 -- # '[' -z 3264111 ']' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@958 -- # kill -0 3264111 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # uname 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3264111 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3264111' 00:20:17.858 killing process with pid 3264111 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@973 -- # kill 3264111 00:20:17.858 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@978 -- # wait 3264111 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@342 -- # nvmf_fini 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@264 -- # local dev 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:18.117 10:37:58 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@130 -- # return 0 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 
-- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # _dev=0 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@41 -- # dev_map=() 00:20:20.021 
10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/setup.sh@284 -- # iptr 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-save 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- nvmf/common.sh@542 -- # iptables-restore 00:20:20.021 00:20:20.021 real 0m10.520s 00:20:20.021 user 0m4.045s 00:20:20.021 sys 0m4.934s 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra.nvmf_wait_for_buf -- common/autotest_common.sh@10 -- # set +x 00:20:20.021 ************************************ 00:20:20.021 END TEST nvmf_wait_for_buf 00:20:20.021 ************************************ 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@47 -- # '[' 0 -eq 1 ']' 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # [[ phy == phy ]] 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # '[' tcp = tcp ']' 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # gather_supported_nvmf_pci_devs 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@125 -- # xtrace_disable 00:20:20.021 10:38:00 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # pci_devs=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:26.588 
10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # net_devs=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # e810=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@136 -- # local -ga e810 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # x722=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@137 -- # local -ga x722 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # mlx=() 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@138 -- # local -ga mlx 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:26.588 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:26.589 10:38:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:26.589 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:26.589 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:26.589 10:38:06 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:26.589 Found net devices under 0000:86:00.0: cvl_0_0 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:26.589 Found net devices under 0000:86:00.1: cvl_0_1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@57 -- # (( 2 > 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@58 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:26.589 ************************************ 00:20:26.589 START TEST nvmf_perf_adq 00:20:26.589 ************************************ 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:26.589 * Looking for test storage... 00:20:26.589 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lcov --version 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # IFS=.-: 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@336 -- # read -ra ver1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # IFS=.-: 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@337 -- # read -ra ver2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@338 -- # local 'op=<' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@340 -- # ver1_l=2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@341 -- # ver2_l=1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@344 -- # case "$op" in 
00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@345 -- # : 1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # decimal 1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@365 -- # ver1[v]=1 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # decimal 2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@353 -- # local d=2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@355 -- # echo 2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@366 -- # ver2[v]=2 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@368 -- # return 0 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:20:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.589 --rc genhtml_branch_coverage=1 00:20:26.589 --rc genhtml_function_coverage=1 00:20:26.589 --rc genhtml_legend=1 00:20:26.589 --rc geninfo_all_blocks=1 00:20:26.589 --rc geninfo_unexecuted_blocks=1 00:20:26.589 00:20:26.589 ' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:20:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.589 --rc genhtml_branch_coverage=1 00:20:26.589 --rc genhtml_function_coverage=1 00:20:26.589 --rc genhtml_legend=1 00:20:26.589 --rc geninfo_all_blocks=1 00:20:26.589 --rc geninfo_unexecuted_blocks=1 00:20:26.589 00:20:26.589 ' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:20:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.589 --rc genhtml_branch_coverage=1 00:20:26.589 --rc genhtml_function_coverage=1 00:20:26.589 --rc genhtml_legend=1 00:20:26.589 --rc geninfo_all_blocks=1 00:20:26.589 --rc geninfo_unexecuted_blocks=1 00:20:26.589 00:20:26.589 ' 00:20:26.589 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:20:26.589 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:26.589 --rc genhtml_branch_coverage=1 00:20:26.589 --rc genhtml_function_coverage=1 00:20:26.590 --rc genhtml_legend=1 00:20:26.590 --rc geninfo_all_blocks=1 00:20:26.590 --rc geninfo_unexecuted_blocks=1 00:20:26.590 00:20:26.590 ' 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:26.590 
10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@15 -- # shopt -s extglob 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:26.590 10:38:06 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@50 -- # : 0 
00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:20:26.590 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@54 -- # have_pci_nics=0 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:20:26.590 10:38:06 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # 
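The `line 31: [: : integer expression expected` message captured above is bash complaining that `'[' '' -eq 1 ']'` compares an empty string with the integer operator `-eq`. A minimal sketch of that failing pattern and a defensive rewrite (variable name is illustrative, not SPDK's actual `common.sh` code):

```shell
#!/usr/bin/env bash

# Failing pattern: when FLAG is unset/empty, [ "" -eq 1 ] is not a valid
# integer test and prints "integer expression expected" on stderr.
FLAG=""
if [ "$FLAG" -eq 1 ] 2>/dev/null; then
    echo "flag set"
fi

# Defensive form: substitute 0 for an empty value before comparing,
# so the integer test always sees a number.
if [ "${FLAG:-0}" -eq 1 ]; then
    echo "flag set"
else
    echo "flag unset"
fi
```

With `FLAG` empty this prints `flag unset` instead of emitting the error; the script in the log tolerates the failure only because the `[` error makes the test false and execution continues.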
local -a pci_net_devs 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:31.861 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.861 10:38:12 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:31.861 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 
00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:31.861 Found net devices under 0000:86:00.0: cvl_0_0 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:31.861 Found net devices under 0000:86:00.1: cvl_0_1 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # adq_reload_driver 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:31.861 10:38:12 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:32.822 10:38:13 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:34.727 10:38:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@76 -- # nvmftestinit 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@262 -- # [[ phy != virt ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:20:40.000 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- 
# [[ e810 == e810 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:40.000 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:40.000 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:20:40.000 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:20:40.000 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:40.001 Found net devices under 0000:86:00.0: cvl_0_0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:40.001 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:40.001 Found net devices under 0000:86:00.1: cvl_0_1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec 
"$NVMF_TARGET_NAMESPACE") 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:20:40.001 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # 
[[ -n '' ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:20:40.001 10.0.0.1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:20:40.001 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:20:40.001 10.0.0.2 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec 
nvmf_ns_spdk ip link set cvl_0_1 up' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:20:40.001 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 
00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:40.002 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.260 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:20:40.261 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:40.261 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.457 ms 00:20:40.261 00:20:40.261 --- 10.0.0.1 ping statistics --- 00:20:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.261 rtt min/avg/max/mdev = 0.457/0.457/0.457/0.000 ms 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:40.261 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:20:40.261 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:20:40.261 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.212 ms 00:20:40.261 00:20:40.261 --- 10.0.0.2 ping statistics --- 00:20:40.261 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:40.261 rtt min/avg/max/mdev = 0.212/0.212/0.212/0.000 ms 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:20:40.261 10:38:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 
00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 
00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:40.261 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=3272492 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 3272492 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3272492 ']' 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:40.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:40.262 10:38:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:40.262 [2024-11-20 10:38:20.910535] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:20:40.262 [2024-11-20 10:38:20.910579] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:40.520 [2024-11-20 10:38:20.989915] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:40.520 [2024-11-20 10:38:21.032912] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:40.520 [2024-11-20 10:38:21.032947] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:40.520 [2024-11-20 10:38:21.032954] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:40.520 [2024-11-20 10:38:21.032962] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:40.520 [2024-11-20 10:38:21.032967] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:20:40.520 [2024-11-20 10:38:21.034550] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:20:40.520 [2024-11-20 10:38:21.034579] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:20:40.520 [2024-11-20 10:38:21.034688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:20:40.520 [2024-11-20 10:38:21.034689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # adq_configure_nvmf_target 0 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.085 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:20:41.344 10:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 [2024-11-20 10:38:21.913630] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 Malloc1 00:20:41.344 10:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:41.344 [2024-11-20 10:38:21.975282] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@82 -- # perfpid=3272727 00:20:41.344 10:38:21 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@83 -- # sleep 2 00:20:41.344 10:38:21 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:43.872 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # rpc_cmd nvmf_get_stats 00:20:43.872 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:20:43.872 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.872 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:20:43.872 10:38:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@85 -- # nvmf_stats='{ 00:20:43.872 "tick_rate": 2100000000, 00:20:43.872 "poll_groups": [ 00:20:43.872 { 00:20:43.872 "name": "nvmf_tgt_poll_group_000", 00:20:43.872 "admin_qpairs": 1, 00:20:43.872 "io_qpairs": 1, 00:20:43.872 "current_admin_qpairs": 1, 00:20:43.872 "current_io_qpairs": 1, 00:20:43.872 "pending_bdev_io": 0, 00:20:43.872 "completed_nvme_io": 20361, 00:20:43.872 "transports": [ 00:20:43.872 { 00:20:43.872 "trtype": "TCP" 00:20:43.872 } 00:20:43.872 ] 00:20:43.872 }, 00:20:43.872 { 00:20:43.872 "name": "nvmf_tgt_poll_group_001", 00:20:43.872 "admin_qpairs": 0, 00:20:43.872 "io_qpairs": 1, 00:20:43.872 "current_admin_qpairs": 0, 00:20:43.872 "current_io_qpairs": 1, 00:20:43.872 "pending_bdev_io": 0, 00:20:43.872 "completed_nvme_io": 20585, 00:20:43.872 "transports": [ 00:20:43.872 { 00:20:43.872 "trtype": "TCP" 00:20:43.872 } 00:20:43.872 ] 00:20:43.872 }, 00:20:43.872 { 00:20:43.872 "name": "nvmf_tgt_poll_group_002", 00:20:43.872 "admin_qpairs": 0, 00:20:43.872 "io_qpairs": 1, 00:20:43.872 "current_admin_qpairs": 0, 00:20:43.872 "current_io_qpairs": 1, 00:20:43.872 "pending_bdev_io": 0, 00:20:43.872 "completed_nvme_io": 20028, 00:20:43.872 
"transports": [ 00:20:43.872 { 00:20:43.872 "trtype": "TCP" 00:20:43.872 } 00:20:43.872 ] 00:20:43.872 }, 00:20:43.872 { 00:20:43.872 "name": "nvmf_tgt_poll_group_003", 00:20:43.872 "admin_qpairs": 0, 00:20:43.872 "io_qpairs": 1, 00:20:43.872 "current_admin_qpairs": 0, 00:20:43.872 "current_io_qpairs": 1, 00:20:43.872 "pending_bdev_io": 0, 00:20:43.872 "completed_nvme_io": 20271, 00:20:43.872 "transports": [ 00:20:43.872 { 00:20:43.872 "trtype": "TCP" 00:20:43.872 } 00:20:43.872 ] 00:20:43.872 } 00:20:43.872 ] 00:20:43.872 }' 00:20:43.872 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # wc -l 00:20:43.872 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:43.872 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # count=4 00:20:43.872 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@87 -- # [[ 4 -ne 4 ]] 00:20:43.872 10:38:24 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # wait 3272727 00:20:52.155 Initializing NVMe Controllers 00:20:52.155 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:52.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:52.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:52.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:52.155 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:52.155 Initialization complete. Launching workers. 
00:20:52.155 ======================================================== 00:20:52.156 Latency(us) 00:20:52.156 Device Information : IOPS MiB/s Average min max 00:20:52.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10826.20 42.29 5911.56 1863.12 10090.37 00:20:52.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10992.70 42.94 5822.07 1940.95 9958.96 00:20:52.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10812.90 42.24 5918.21 2115.48 9685.65 00:20:52.156 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10896.50 42.56 5873.70 1196.87 10571.30 00:20:52.156 ======================================================== 00:20:52.156 Total : 43528.29 170.03 5881.14 1196.87 10571.30 00:20:52.156 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # nvmftestfini 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:20:52.156 rmmod nvme_tcp 00:20:52.156 rmmod nvme_fabrics 00:20:52.156 rmmod nvme_keyring 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:20:52.156 10:38:32 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 3272492 ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3272492 ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3272492' 00:20:52.156 killing process with pid 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3272492 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:20:52.156 10:38:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:20:54.061 10:38:34 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@94 -- # adq_reload_driver 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@58 -- # modprobe -a sch_mqprio 00:20:54.061 10:38:34 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@61 -- # rmmod ice 00:20:54.997 10:38:35 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@62 -- # modprobe ice 00:20:57.531 10:38:37 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@63 -- # sleep 5 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # nvmftestinit 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # remove_target_ns 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # xtrace_disable 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # pci_devs=() 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:02.800 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:02.801 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # net_devs=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # e810=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@136 -- # local -ga e810 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # x722=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@137 -- # local -ga x722 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # mlx=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@138 -- # local -ga mlx 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 
00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:02.801 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:02.801 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.801 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:02.801 Found net devices under 0000:86:00.0: cvl_0_0 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:02.801 Found net devices under 0000:86:00.1: cvl_0_1 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@262 -- # is_hw=yes 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:02.801 
10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@257 -- # create_target_ns 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:02.801 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@28 -- # local -g _dev 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # ips=() 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:02.801 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772161 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:02.802 10.0.0.1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq 
-- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@11 -- # local val=167772162 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:02.802 10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:02.802 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:02.802 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:02.802 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.496 ms 00:21:02.802 00:21:02.802 --- 10.0.0.1 ping statistics --- 00:21:02.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.802 rtt min/avg/max/mdev = 0.496/0.496/0.496/0.000 ms 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:02.802 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:02.802 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:02.802 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.205 ms 00:21:02.802 00:21:02.802 --- 10.0.0.2 ping statistics --- 00:21:02.802 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:02.802 rtt min/avg/max/mdev = 0.205/0.205/0.205/0.000 ms 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # return 0 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:02.802 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:02.803 10:38:42 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:02.803 10:38:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 
00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target0 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@107 -- # local dev=target1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@109 -- # return 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@168 -- # dev= 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@169 -- # return 0 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@98 -- # adq_configure_driver 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@22 -- # ip netns exec nvmf_ns_spdk ethtool --offload cvl_0_1 hw-tc-offload on 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec nvmf_ns_spdk ethtool --set-priv-flags cvl_0_1 channel-pkt-inspect-optimize off 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:02.803 net.core.busy_poll = 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:02.803 net.core.busy_read = 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_1 ingress 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec nvmf_ns_spdk /usr/sbin/tc filter add dev cvl_0_1 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_1 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
common/autotest_common.sh@10 -- # set +x 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@328 -- # nvmfpid=3276553 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # waitforlisten 3276553 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@835 -- # '[' -z 3276553 ']' 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:02.803 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:02.803 10:38:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:02.803 [2024-11-20 10:38:43.379522] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:21:02.803 [2024-11-20 10:38:43.379574] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:02.803 [2024-11-20 10:38:43.456691] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:02.803 [2024-11-20 10:38:43.499991] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:02.803 [2024-11-20 10:38:43.500025] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:02.803 [2024-11-20 10:38:43.500031] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:02.803 [2024-11-20 10:38:43.500037] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:02.804 [2024-11-20 10:38:43.500043] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:02.804 [2024-11-20 10:38:43.501477] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:02.804 [2024-11-20 10:38:43.501582] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:02.804 [2024-11-20 10:38:43.501686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:02.804 [2024-11-20 10:38:43.501686] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@868 -- # return 0 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # adq_configure_nvmf_target 1 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:03.738 10:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 [2024-11-20 10:38:44.376460] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 Malloc1 00:21:03.738 10:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.738 [2024-11-20 10:38:44.442993] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@104 -- # perfpid=3276806 00:21:03.738 10:38:44 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@105 -- # sleep 2 00:21:03.738 10:38:44 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # rpc_cmd nvmf_get_stats 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmf_stats='{ 00:21:06.264 "tick_rate": 2100000000, 00:21:06.264 "poll_groups": [ 00:21:06.264 { 00:21:06.264 "name": "nvmf_tgt_poll_group_000", 00:21:06.264 "admin_qpairs": 1, 00:21:06.264 "io_qpairs": 1, 00:21:06.264 "current_admin_qpairs": 1, 00:21:06.264 "current_io_qpairs": 1, 00:21:06.264 "pending_bdev_io": 0, 00:21:06.264 "completed_nvme_io": 27357, 00:21:06.264 "transports": [ 00:21:06.264 { 00:21:06.264 "trtype": "TCP" 00:21:06.264 } 00:21:06.264 ] 00:21:06.264 }, 00:21:06.264 { 00:21:06.264 "name": "nvmf_tgt_poll_group_001", 00:21:06.264 "admin_qpairs": 0, 00:21:06.264 "io_qpairs": 3, 00:21:06.264 "current_admin_qpairs": 0, 00:21:06.264 "current_io_qpairs": 3, 00:21:06.264 "pending_bdev_io": 0, 00:21:06.264 "completed_nvme_io": 31074, 00:21:06.264 "transports": [ 00:21:06.264 { 00:21:06.264 "trtype": "TCP" 00:21:06.264 } 00:21:06.264 ] 00:21:06.264 }, 00:21:06.264 { 00:21:06.264 "name": "nvmf_tgt_poll_group_002", 00:21:06.264 "admin_qpairs": 0, 00:21:06.264 "io_qpairs": 0, 00:21:06.264 "current_admin_qpairs": 0, 00:21:06.264 "current_io_qpairs": 0, 00:21:06.264 "pending_bdev_io": 0, 00:21:06.264 "completed_nvme_io": 0, 00:21:06.264 "transports": 
[ 00:21:06.264 { 00:21:06.264 "trtype": "TCP" 00:21:06.264 } 00:21:06.264 ] 00:21:06.264 }, 00:21:06.264 { 00:21:06.264 "name": "nvmf_tgt_poll_group_003", 00:21:06.264 "admin_qpairs": 0, 00:21:06.264 "io_qpairs": 0, 00:21:06.264 "current_admin_qpairs": 0, 00:21:06.264 "current_io_qpairs": 0, 00:21:06.264 "pending_bdev_io": 0, 00:21:06.264 "completed_nvme_io": 0, 00:21:06.264 "transports": [ 00:21:06.264 { 00:21:06.264 "trtype": "TCP" 00:21:06.264 } 00:21:06.264 ] 00:21:06.264 } 00:21:06.264 ] 00:21:06.264 }' 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # wc -l 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@108 -- # count=2 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # [[ 2 -lt 2 ]] 00:21:06.264 10:38:46 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@114 -- # wait 3276806 00:21:14.369 Initializing NVMe Controllers 00:21:14.369 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:14.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:14.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:14.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:14.369 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:14.369 Initialization complete. Launching workers. 
00:21:14.369 ======================================================== 00:21:14.369 Latency(us) 00:21:14.369 Device Information : IOPS MiB/s Average min max 00:21:14.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 5311.89 20.75 12049.54 1552.06 58891.93 00:21:14.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 5116.19 19.99 12509.89 1957.47 57987.24 00:21:14.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 5168.79 20.19 12381.70 1466.74 57372.16 00:21:14.369 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 15349.38 59.96 4169.19 1619.34 44956.42 00:21:14.369 ======================================================== 00:21:14.369 Total : 30946.25 120.88 8272.46 1466.74 58891.93 00:21:14.369 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@115 -- # nvmftestfini 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@99 -- # sync 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@102 -- # set +e 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:14.369 rmmod nvme_tcp 00:21:14.369 rmmod nvme_fabrics 00:21:14.369 rmmod nvme_keyring 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@106 -- # set -e 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@107 -- # return 0 00:21:14.369 10:38:54 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@336 -- # '[' -n 3276553 ']' 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@337 -- # killprocess 3276553 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # '[' -z 3276553 ']' 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # kill -0 3276553 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # uname 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3276553 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:14.369 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3276553' 00:21:14.370 killing process with pid 3276553 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@973 -- # kill 3276553 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@978 -- # wait 3276553 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # nvmf_fini 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@264 -- # local dev 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:14.370 10:38:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@130 -- # return 0 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:16.272 10:38:56 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # _dev=0 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@41 -- # dev_map=() 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/setup.sh@284 -- # iptr 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-save 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@542 -- # iptables-restore 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@117 -- # trap - SIGINT SIGTERM EXIT 00:21:16.272 00:21:16.272 real 0m50.546s 00:21:16.272 user 2m49.063s 00:21:16.272 sys 0m10.698s 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:16.272 ************************************ 00:21:16.272 END TEST nvmf_perf_adq 00:21:16.272 ************************************ 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@65 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.272 10:38:56 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:16.531 ************************************ 00:21:16.531 START TEST nvmf_shutdown 00:21:16.531 ************************************ 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:16.531 * Looking for test storage... 00:21:16.531 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lcov --version 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:21:16.531 10:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@345 -- # : 1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # decimal 1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # decimal 2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@353 -- # local d=2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@355 -- # echo 2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@367 -- # (( ver1[v] 
> ver2[v] )) 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@368 -- # return 0 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.531 --rc genhtml_branch_coverage=1 00:21:16.531 --rc genhtml_function_coverage=1 00:21:16.531 --rc genhtml_legend=1 00:21:16.531 --rc geninfo_all_blocks=1 00:21:16.531 --rc geninfo_unexecuted_blocks=1 00:21:16.531 00:21:16.531 ' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.531 --rc genhtml_branch_coverage=1 00:21:16.531 --rc genhtml_function_coverage=1 00:21:16.531 --rc genhtml_legend=1 00:21:16.531 --rc geninfo_all_blocks=1 00:21:16.531 --rc geninfo_unexecuted_blocks=1 00:21:16.531 00:21:16.531 ' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.531 --rc genhtml_branch_coverage=1 00:21:16.531 --rc genhtml_function_coverage=1 00:21:16.531 --rc genhtml_legend=1 00:21:16.531 --rc geninfo_all_blocks=1 00:21:16.531 --rc geninfo_unexecuted_blocks=1 00:21:16.531 00:21:16.531 ' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:16.531 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:16.531 --rc genhtml_branch_coverage=1 00:21:16.531 --rc genhtml_function_coverage=1 00:21:16.531 --rc genhtml_legend=1 
00:21:16.531 --rc geninfo_all_blocks=1 00:21:16.531 --rc geninfo_unexecuted_blocks=1 00:21:16.531 00:21:16.531 ' 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:16.531 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:16.532 10:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@15 -- # shopt -s extglob 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@50 -- # : 0 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:16.532 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BDEV_SIZE=64 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@162 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:16.532 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:16.790 ************************************ 00:21:16.790 START TEST nvmf_shutdown_tc1 00:21:16.790 ************************************ 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc1 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@75 -- # starttarget 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:16.790 10:38:57 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # remove_target_ns 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # xtrace_disable 00:21:16.790 10:38:57 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # pci_devs=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:23.358 10:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # net_devs=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # e810=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@136 -- # local -ga e810 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # x722=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@137 -- # local -ga x722 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # mlx=() 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@138 -- # local -ga mlx 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:23.358 10:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:23.358 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:23.359 Found 0000:86:00.0 (0x8086 - 0x159b) 
00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:23.359 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:23.359 Found net devices under 0000:86:00.0: cvl_0_0 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:23.359 10:39:02 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:23.359 Found net devices under 0000:86:00.1: cvl_0_1 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@262 -- # is_hw=yes 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@257 -- # create_target_ns 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@144 -- # 
NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@28 -- # local -g _dev 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # 
(( _dev = _dev, max = _dev )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # ips=() 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@68 -- # [[ phy == 
veth ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:23.359 10:39:02 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:23.359 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:23.359 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772161 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:23.360 10.0.0.1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@11 -- # local val=167772162 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:23.360 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:23.360 10.0.0.2 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:23.360 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:23.360 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:23.360 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:23.360 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:23.360 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.314 ms 00:21:23.360 00:21:23.360 --- 10.0.0.1 ping statistics --- 00:21:23.360 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.360 rtt min/avg/max/mdev = 0.314/0.314/0.314/0.000 ms 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # 
local dev=target0 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:23.360 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:23.361 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:23.361 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:21:23.361 00:21:23.361 --- 10.0.0.2 ping statistics --- 00:21:23.361 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:23.361 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # return 0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:23.361 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:23.361 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.361 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:23.361 
10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@107 -- # local dev=target1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@109 -- # return 1 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@168 -- # dev= 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@169 -- # return 0 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:23.361 10:39:03 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@328 -- # nvmfpid=3282178 00:21:23.361 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # waitforlisten 3282178 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3282178 ']' 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain 
socket /var/tmp/spdk.sock...' 00:21:23.362 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:23.362 10:39:03 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.362 [2024-11-20 10:39:03.442262] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:23.362 [2024-11-20 10:39:03.442307] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:23.362 [2024-11-20 10:39:03.521018] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:23.362 [2024-11-20 10:39:03.564136] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:23.362 [2024-11-20 10:39:03.564172] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:23.362 [2024-11-20 10:39:03.564183] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:23.362 [2024-11-20 10:39:03.564189] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:23.362 [2024-11-20 10:39:03.564194] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
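Two bit-level conversions show up in the trace above: val_to_ip turning 167772161 into 10.0.0.1 (the `printf '%u.%u.%u.%u\n' 10 0 0 1` call), and the `-m 0x1E` core mask passed to nvmf_tgt selecting its reactor cores. A standalone sketch of both, assuming the usual octet/bit-shift decomposition (the function bodies here are reconstructions for illustration, not copied from nvmf/setup.sh):

```shell
#!/usr/bin/env bash
# Reconstruction of val_to_ip from the trace above: split an unsigned
# 32-bit value into dotted-quad octets (167772161 == 0x0A000001 == 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24)) $(((val >> 16) & 0xff)) $(((val >> 8) & 0xff)) $((val & 0xff))
}

# Decode a core mask such as the -m 0x1E passed to nvmf_tgt above:
# 0x1E == 0b11110, i.e. bits 1-4 set, so cores 1-4 are enabled.
decode_coremask() {
  local mask=$(($1)) core=0 cores=()
  while ((mask)); do
    ((mask & 1)) && cores+=("$core")
    ((mask >>= 1, core++))
  done
  echo "${cores[*]}"
}

val_to_ip 167772161    # → 10.0.0.1
decode_coremask 0x1E   # → 1 2 3 4
```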
00:21:23.362 [2024-11-20 10:39:03.565692] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:23.362 [2024-11-20 10:39:03.565800] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:23.362 [2024-11-20 10:39:03.565907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:23.362 [2024-11-20 10:39:03.565907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:23.648 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:23.648 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.649 [2024-11-20 10:39:04.315823] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:23.649 10:39:04 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 
00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@29 -- # cat 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:23.649 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:23.907 Malloc1 00:21:23.907 [2024-11-20 10:39:04.424829] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:23.907 Malloc2 00:21:23.907 Malloc3 00:21:23.907 Malloc4 00:21:23.907 Malloc5 00:21:23.907 Malloc6 00:21:24.166 Malloc7 00:21:24.166 Malloc8 00:21:24.166 Malloc9 
00:21:24.166 Malloc10 00:21:24.166 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:24.166 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # perfpid=3282463 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # waitforlisten 3282463 /var/tmp/bdevperf.sock 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@835 -- # '[' -z 3282463 ']' 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:24.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": ${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": ${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": ${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": 
${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": ${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.167 { 00:21:24.167 "params": { 00:21:24.167 "name": "Nvme$subsystem", 00:21:24.167 "trtype": "$TEST_TRANSPORT", 00:21:24.167 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.167 "adrfam": "ipv4", 00:21:24.167 "trsvcid": "$NVMF_PORT", 00:21:24.167 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.167 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.167 "hdgst": ${hdgst:-false}, 00:21:24.167 "ddgst": ${ddgst:-false} 00:21:24.167 }, 00:21:24.167 "method": "bdev_nvme_attach_controller" 
00:21:24.167 } 00:21:24.167 EOF 00:21:24.167 )") 00:21:24.167 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.425 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.425 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.425 { 00:21:24.425 "params": { 00:21:24.425 "name": "Nvme$subsystem", 00:21:24.425 "trtype": "$TEST_TRANSPORT", 00:21:24.425 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.425 "adrfam": "ipv4", 00:21:24.425 "trsvcid": "$NVMF_PORT", 00:21:24.425 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.425 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.426 "hdgst": ${hdgst:-false}, 00:21:24.426 "ddgst": ${ddgst:-false} 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 } 00:21:24.426 EOF 00:21:24.426 )") 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.426 [2024-11-20 10:39:04.897785] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:21:24.426 [2024-11-20 10:39:04.897837] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.426 { 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme$subsystem", 00:21:24.426 "trtype": "$TEST_TRANSPORT", 00:21:24.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "$NVMF_PORT", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.426 "hdgst": ${hdgst:-false}, 00:21:24.426 "ddgst": ${ddgst:-false} 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 } 00:21:24.426 EOF 00:21:24.426 )") 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.426 { 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme$subsystem", 00:21:24.426 "trtype": "$TEST_TRANSPORT", 00:21:24.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "$NVMF_PORT", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.426 "hdgst": ${hdgst:-false}, 00:21:24.426 "ddgst": ${ddgst:-false} 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 
00:21:24.426 } 00:21:24.426 EOF 00:21:24.426 )") 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:24.426 { 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme$subsystem", 00:21:24.426 "trtype": "$TEST_TRANSPORT", 00:21:24.426 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "$NVMF_PORT", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:24.426 "hdgst": ${hdgst:-false}, 00:21:24.426 "ddgst": ${ddgst:-false} 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 } 00:21:24.426 EOF 00:21:24.426 )") 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:21:24.426 10:39:04 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme1", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme2", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme3", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme4", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 
00:21:24.426 "name": "Nvme5", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme6", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.426 "name": "Nvme7", 00:21:24.426 "trtype": "tcp", 00:21:24.426 "traddr": "10.0.0.2", 00:21:24.426 "adrfam": "ipv4", 00:21:24.426 "trsvcid": "4420", 00:21:24.426 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:24.426 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:24.426 "hdgst": false, 00:21:24.426 "ddgst": false 00:21:24.426 }, 00:21:24.426 "method": "bdev_nvme_attach_controller" 00:21:24.426 },{ 00:21:24.426 "params": { 00:21:24.427 "name": "Nvme8", 00:21:24.427 "trtype": "tcp", 00:21:24.427 "traddr": "10.0.0.2", 00:21:24.427 "adrfam": "ipv4", 00:21:24.427 "trsvcid": "4420", 00:21:24.427 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:24.427 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:24.427 "hdgst": false, 00:21:24.427 "ddgst": false 00:21:24.427 }, 00:21:24.427 "method": "bdev_nvme_attach_controller" 00:21:24.427 },{ 00:21:24.427 "params": { 00:21:24.427 "name": "Nvme9", 00:21:24.427 "trtype": "tcp", 00:21:24.427 "traddr": "10.0.0.2", 00:21:24.427 "adrfam": "ipv4", 00:21:24.427 "trsvcid": "4420", 00:21:24.427 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:24.427 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:24.427 "hdgst": false, 00:21:24.427 "ddgst": false 00:21:24.427 }, 00:21:24.427 "method": "bdev_nvme_attach_controller" 00:21:24.427 },{ 00:21:24.427 "params": { 00:21:24.427 "name": "Nvme10", 00:21:24.427 "trtype": "tcp", 00:21:24.427 "traddr": "10.0.0.2", 00:21:24.427 "adrfam": "ipv4", 00:21:24.427 "trsvcid": "4420", 00:21:24.427 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:24.427 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:24.427 "hdgst": false, 00:21:24.427 "ddgst": false 00:21:24.427 }, 00:21:24.427 "method": "bdev_nvme_attach_controller" 00:21:24.427 }' 00:21:24.427 [2024-11-20 10:39:04.978389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:24.427 [2024-11-20 10:39:05.019449] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@868 -- # return 0 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@81 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # kill -9 3282463 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@85 -- # rm -f /var/run/spdk_bdev1 00:21:26.338 10:39:06 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # sleep 1 00:21:27.272 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 74: 3282463 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@89 -- # kill -0 3282178 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@92 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # config=() 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # local subsystem config 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 
10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 
00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.272 { 00:21:27.272 "params": { 00:21:27.272 "name": "Nvme$subsystem", 00:21:27.272 "trtype": "$TEST_TRANSPORT", 00:21:27.272 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.272 "adrfam": "ipv4", 00:21:27.272 "trsvcid": "$NVMF_PORT", 00:21:27.272 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.272 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.272 "hdgst": ${hdgst:-false}, 00:21:27.272 "ddgst": ${ddgst:-false} 00:21:27.272 }, 00:21:27.272 "method": "bdev_nvme_attach_controller" 00:21:27.272 } 00:21:27.272 EOF 00:21:27.272 )") 00:21:27.272 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.272 [2024-11-20 10:39:07.856111] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:21:27.272 [2024-11-20 10:39:07.856165] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3283364 ] 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.273 { 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme$subsystem", 00:21:27.273 "trtype": "$TEST_TRANSPORT", 00:21:27.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "$NVMF_PORT", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.273 "hdgst": ${hdgst:-false}, 00:21:27.273 "ddgst": ${ddgst:-false} 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 } 00:21:27.273 EOF 00:21:27.273 )") 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.273 { 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme$subsystem", 00:21:27.273 "trtype": "$TEST_TRANSPORT", 00:21:27.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "$NVMF_PORT", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.273 "hdgst": ${hdgst:-false}, 00:21:27.273 "ddgst": ${ddgst:-false} 00:21:27.273 }, 00:21:27.273 "method": 
"bdev_nvme_attach_controller" 00:21:27.273 } 00:21:27.273 EOF 00:21:27.273 )") 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:27.273 { 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme$subsystem", 00:21:27.273 "trtype": "$TEST_TRANSPORT", 00:21:27.273 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "$NVMF_PORT", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:27.273 "hdgst": ${hdgst:-false}, 00:21:27.273 "ddgst": ${ddgst:-false} 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 } 00:21:27.273 EOF 00:21:27.273 )") 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # cat 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@396 -- # jq . 
00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@397 -- # IFS=, 00:21:27.273 10:39:07 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme1", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme2", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme3", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme4", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 
00:21:27.273 "name": "Nvme5", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme6", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme7", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme8", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme9", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 },{ 00:21:27.273 "params": { 00:21:27.273 "name": "Nvme10", 00:21:27.273 "trtype": "tcp", 00:21:27.273 "traddr": "10.0.0.2", 00:21:27.273 "adrfam": "ipv4", 00:21:27.273 "trsvcid": "4420", 00:21:27.273 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:27.273 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:27.273 "hdgst": false, 00:21:27.273 "ddgst": false 00:21:27.273 }, 00:21:27.273 "method": "bdev_nvme_attach_controller" 00:21:27.273 }' 00:21:27.273 [2024-11-20 10:39:07.937240] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:27.273 [2024-11-20 10:39:07.978066] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:29.175 Running I/O for 1 seconds... 00:21:30.108 2186.00 IOPS, 136.62 MiB/s 00:21:30.108 Latency(us) 00:21:30.108 [2024-11-20T09:39:10.839Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:30.108 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme1n1 : 1.14 281.68 17.61 0.00 0.00 225005.86 15666.22 206719.27 00:21:30.108 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme2n1 : 1.08 237.44 14.84 0.00 0.00 262862.75 19099.06 231685.36 00:21:30.108 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme3n1 : 1.13 287.74 17.98 0.00 0.00 212680.21 14355.50 216705.71 00:21:30.108 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme4n1 : 1.14 280.44 17.53 0.00 0.00 216741.69 12919.95 216705.71 00:21:30.108 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 
00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme5n1 : 1.16 276.81 17.30 0.00 0.00 216615.59 16103.13 208716.56 00:21:30.108 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme6n1 : 1.15 278.11 17.38 0.00 0.00 212433.09 15791.06 228689.43 00:21:30.108 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme7n1 : 1.15 279.37 17.46 0.00 0.00 208337.77 24716.43 206719.27 00:21:30.108 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme8n1 : 1.15 277.30 17.33 0.00 0.00 207006.77 14917.24 218702.99 00:21:30.108 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme9n1 : 1.16 280.92 17.56 0.00 0.00 201375.04 1732.02 219701.64 00:21:30.108 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:30.108 Verification LBA range: start 0x0 length 0x400 00:21:30.108 Nvme10n1 : 1.16 274.90 17.18 0.00 0.00 202907.55 14480.34 230686.72 00:21:30.108 [2024-11-20T09:39:10.839Z] =================================================================================================================== 00:21:30.108 [2024-11-20T09:39:10.839Z] Total : 2754.71 172.17 0.00 0.00 215621.42 1732.02 231685.36 00:21:30.108 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@95 -- # stoptarget 00:21:30.108 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:30.108 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 
00:21:30.108 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@99 -- # sync 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@102 -- # set +e 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:30.366 rmmod nvme_tcp 00:21:30.366 rmmod nvme_fabrics 00:21:30.366 rmmod nvme_keyring 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@106 -- # set -e 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@107 -- # return 0 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@336 -- # '[' -n 3282178 ']' 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@337 -- # killprocess 3282178 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # '[' -z 3282178 ']' 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # kill -0 3282178 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # uname 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3282178 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3282178' 00:21:30.366 killing process with pid 3282178 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@973 -- # kill 3282178 00:21:30.366 10:39:10 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@978 -- # wait 3282178 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # nvmf_fini 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@264 -- # local dev 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 
15> /dev/null' 00:21:30.625 10:39:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@130 -- # return 0 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 
00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # _dev=0 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@41 -- # dev_map=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/setup.sh@284 -- # iptr 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-save 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@542 -- # iptables-restore 00:21:33.162 00:21:33.162 real 0m16.130s 00:21:33.162 user 0m37.295s 00:21:33.162 sys 0m5.852s 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.162 ************************************ 00:21:33.162 END TEST nvmf_shutdown_tc1 00:21:33.162 ************************************ 
00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@163 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:33.162 ************************************ 00:21:33.162 START TEST nvmf_shutdown_tc2 00:21:33.162 ************************************ 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc2 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@100 -- # starttarget 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # remove_target_ns 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:33.162 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # xtrace_disable 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # pci_devs=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # net_devs=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 -- # e810=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@136 
-- # local -ga e810 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # x722=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@137 -- # local -ga x722 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # mlx=() 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@138 -- # local -ga mlx 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:33.162 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:33.163 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.163 
10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:33.163 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:33.163 Found net devices under 0000:86:00.0: cvl_0_0 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:33.163 Found net devices under 0000:86:00.1: cvl_0_1 
00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@262 -- # is_hw=yes 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@257 -- # create_target_ns 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@28 -- # local -g _dev 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # ips=() 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:33.163 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:33.163 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772161 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:33.163 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:33.163 10.0.0.1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@11 -- # local val=167772162 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:33.164 10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:33.164 
10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:33.164 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:33.164 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:21:33.164 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:33.164 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.351 ms 00:21:33.164 00:21:33.164 --- 10.0.0.1 ping statistics --- 00:21:33.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.164 rtt min/avg/max/mdev = 0.351/0.351/0.351/0.000 ms 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:33.164 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:33.164 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:33.164 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.243 ms 00:21:33.164 00:21:33.164 --- 10.0.0.2 ping statistics --- 00:21:33.164 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:33.164 rtt min/avg/max/mdev = 0.243/0.243/0.243/0.000 ms 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # return 0 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:33.164 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:33.165 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:33.165 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.165 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:33.165 
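The `get_ip_address` calls traced above resolve a logical device (`initiator0`, `target0`) to its physical interface and then read the assigned IP from that interface's `ifalias` file under `/sys/class/net`, optionally inside the target network namespace. A minimal standalone sketch of that lookup, using a temporary directory in place of `/sys/class/net` so it runs without the real `cvl_0_*` interfaces (the `SYSFS_NET` variable and the simplified function are stand-ins, not SPDK's actual helpers):

```shell
# Sketch of the ifalias-based IP lookup seen in nvmf/setup.sh's
# get_ip_address(): resolve a device name, then read its ifalias.
# SYSFS_NET stands in for /sys/class/net so this runs anywhere.
SYSFS_NET=$(mktemp -d)
trap 'rm -rf "$SYSFS_NET"' EXIT

# Fake two interfaces named the way this test bed names them.
mkdir -p "$SYSFS_NET/cvl_0_0" "$SYSFS_NET/cvl_0_1"
echo 10.0.0.1 > "$SYSFS_NET/cvl_0_0/ifalias"
echo 10.0.0.2 > "$SYSFS_NET/cvl_0_1/ifalias"

get_ip_address() {
	local dev=$1 ip
	ip=$(cat "$SYSFS_NET/$dev/ifalias" 2>/dev/null) || return 0
	[[ -n $ip ]] && echo "$ip"
}

get_ip_address cvl_0_0   # prints 10.0.0.1
get_ip_address cvl_0_1   # prints 10.0.0.2
```

The real helper additionally wraps the `cat` in `ip netns exec nvmf_ns_spdk` when an `in_ns` command array is supplied, which is why the target-side reads above run inside the namespace.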
10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@107 -- # local dev=target1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@109 -- # return 1 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@168 -- # dev= 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@169 -- # return 0 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:33.165 10:39:13 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:33.165 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3284595 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3284595 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3284595 ']' 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:33.424 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:33.424 10:39:13 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:33.424 [2024-11-20 10:39:13.985517] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:33.424 [2024-11-20 10:39:13.985565] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:33.424 [2024-11-20 10:39:14.064463] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:33.424 [2024-11-20 10:39:14.106232] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:33.424 [2024-11-20 10:39:14.106268] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:33.424 [2024-11-20 10:39:14.106275] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:33.424 [2024-11-20 10:39:14.106281] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:33.424 [2024-11-20 10:39:14.106285] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
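After launching `nvmf_tgt` inside the `nvmf_ns_spdk` namespace, `waitforlisten` blocks until the app is up on `/var/tmp/spdk.sock`. A simplified, hypothetical version of that wait loop (polling for the socket path with a retry budget; SPDK's real helper also checks the pid is alive and that the RPC server answers):

```shell
# Simplified waitforlisten-style loop: poll until a UNIX-socket
# path appears or the retry budget runs out. A stand-in sketch,
# not SPDK's actual implementation.
waitforsock() {
	local sock=$1 max_retries=${2:-100} i
	for ((i = 0; i < max_retries; i++)); do
		[[ -S $sock || -e $sock ]] && return 0
		sleep 0.1
	done
	return 1
}

# Demo: create the "socket" shortly after we start waiting.
demo=$(mktemp -u)
( sleep 0.3; : > "$demo" ) &
waitforsock "$demo" && echo "listening on $demo"
wait
rm -f "$demo"
```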
00:21:33.424 [2024-11-20 10:39:14.107886] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:33.424 [2024-11-20 10:39:14.107998] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:33.424 [2024-11-20 10:39:14.108105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:33.424 [2024-11-20 10:39:14.108105] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.359 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.360 [2024-11-20 10:39:14.858569] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.360 10:39:14 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 
00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@29 -- # cat 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:34.360 10:39:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.360 Malloc1 00:21:34.360 [2024-11-20 10:39:14.968167] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:34.360 Malloc2 00:21:34.360 Malloc3 00:21:34.360 Malloc4 00:21:34.618 Malloc5 00:21:34.618 Malloc6 00:21:34.618 Malloc7 00:21:34.618 Malloc8 00:21:34.618 Malloc9 
00:21:34.618 Malloc10 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # perfpid=3284874 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # waitforlisten 3284874 /var/tmp/bdevperf.sock 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3284874 ']' 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
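Note that bdevperf is started with `--json /dev/fd/63`: its configuration arrives over a bash process substitution rather than a file on disk, which is why a file-descriptor path appears in the command line. A minimal illustration of the same pattern, with `cat` standing in for bdevperf and a trivial generator standing in for `gen_nvmf_target_json`:

```shell
# The harness feeds generated JSON to bdevperf via process
# substitution: <(...) expands to a /dev/fd/N path the consumer
# opens like a regular file. `cat` stands in for bdevperf here.
gen_config() {
	printf '{ "subsystems": [] }\n'
}

cat <(gen_config)   # prints { "subsystems": [] }
```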
00:21:34.878 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # config=() 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # local subsystem config 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.878 { 00:21:34.878 "params": { 00:21:34.878 "name": "Nvme$subsystem", 00:21:34.878 "trtype": "$TEST_TRANSPORT", 00:21:34.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.878 "adrfam": "ipv4", 00:21:34.878 "trsvcid": "$NVMF_PORT", 00:21:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.878 "hdgst": ${hdgst:-false}, 00:21:34.878 "ddgst": ${ddgst:-false} 00:21:34.878 }, 00:21:34.878 "method": "bdev_nvme_attach_controller" 00:21:34.878 } 00:21:34.878 EOF 00:21:34.878 )") 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.878 { 00:21:34.878 "params": { 00:21:34.878 "name": "Nvme$subsystem", 00:21:34.878 "trtype": "$TEST_TRANSPORT", 00:21:34.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.878 
"adrfam": "ipv4", 00:21:34.878 "trsvcid": "$NVMF_PORT", 00:21:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.878 "hdgst": ${hdgst:-false}, 00:21:34.878 "ddgst": ${ddgst:-false} 00:21:34.878 }, 00:21:34.878 "method": "bdev_nvme_attach_controller" 00:21:34.878 } 00:21:34.878 EOF 00:21:34.878 )") 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.878 { 00:21:34.878 "params": { 00:21:34.878 "name": "Nvme$subsystem", 00:21:34.878 "trtype": "$TEST_TRANSPORT", 00:21:34.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.878 "adrfam": "ipv4", 00:21:34.878 "trsvcid": "$NVMF_PORT", 00:21:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.878 "hdgst": ${hdgst:-false}, 00:21:34.878 "ddgst": ${ddgst:-false} 00:21:34.878 }, 00:21:34.878 "method": "bdev_nvme_attach_controller" 00:21:34.878 } 00:21:34.878 EOF 00:21:34.878 )") 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.878 { 00:21:34.878 "params": { 00:21:34.878 "name": "Nvme$subsystem", 00:21:34.878 "trtype": "$TEST_TRANSPORT", 00:21:34.878 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.878 "adrfam": "ipv4", 00:21:34.878 "trsvcid": "$NVMF_PORT", 00:21:34.878 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:34.878 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.878 "hdgst": ${hdgst:-false}, 00:21:34.878 "ddgst": ${ddgst:-false} 00:21:34.878 }, 00:21:34.878 "method": "bdev_nvme_attach_controller" 00:21:34.878 } 00:21:34.878 EOF 00:21:34.878 )") 00:21:34.878 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": ${hdgst:-false}, 00:21:34.879 "ddgst": ${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": ${hdgst:-false}, 00:21:34.879 "ddgst": 
${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": ${hdgst:-false}, 00:21:34.879 "ddgst": ${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 [2024-11-20 10:39:15.443238] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:21:34.879 [2024-11-20 10:39:15.443290] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3284874 ] 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": ${hdgst:-false}, 00:21:34.879 "ddgst": ${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": 
${hdgst:-false}, 00:21:34.879 "ddgst": ${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:34.879 { 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme$subsystem", 00:21:34.879 "trtype": "$TEST_TRANSPORT", 00:21:34.879 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "$NVMF_PORT", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:34.879 "hdgst": ${hdgst:-false}, 00:21:34.879 "ddgst": ${ddgst:-false} 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 } 00:21:34.879 EOF 00:21:34.879 )") 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # cat 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@396 -- # jq . 
00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@397 -- # IFS=, 00:21:34.879 10:39:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme1", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme2", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme3", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme4", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 
00:21:34.879 "name": "Nvme5", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme6", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme7", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.879 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:34.879 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:34.879 "hdgst": false, 00:21:34.879 "ddgst": false 00:21:34.879 }, 00:21:34.879 "method": "bdev_nvme_attach_controller" 00:21:34.879 },{ 00:21:34.879 "params": { 00:21:34.879 "name": "Nvme8", 00:21:34.879 "trtype": "tcp", 00:21:34.879 "traddr": "10.0.0.2", 00:21:34.879 "adrfam": "ipv4", 00:21:34.879 "trsvcid": "4420", 00:21:34.880 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:34.880 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:34.880 "hdgst": false, 00:21:34.880 "ddgst": false 00:21:34.880 }, 00:21:34.880 "method": "bdev_nvme_attach_controller" 00:21:34.880 },{ 00:21:34.880 "params": { 00:21:34.880 "name": "Nvme9", 00:21:34.880 "trtype": "tcp", 00:21:34.880 "traddr": "10.0.0.2", 00:21:34.880 "adrfam": "ipv4", 00:21:34.880 "trsvcid": "4420", 00:21:34.880 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:34.880 "hostnqn": "nqn.2016-06.io.spdk:host9", 
00:21:34.880 "hdgst": false, 00:21:34.880 "ddgst": false 00:21:34.880 }, 00:21:34.880 "method": "bdev_nvme_attach_controller" 00:21:34.880 },{ 00:21:34.880 "params": { 00:21:34.880 "name": "Nvme10", 00:21:34.880 "trtype": "tcp", 00:21:34.880 "traddr": "10.0.0.2", 00:21:34.880 "adrfam": "ipv4", 00:21:34.880 "trsvcid": "4420", 00:21:34.880 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:34.880 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:34.880 "hdgst": false, 00:21:34.880 "ddgst": false 00:21:34.880 }, 00:21:34.880 "method": "bdev_nvme_attach_controller" 00:21:34.880 }' 00:21:34.880 [2024-11-20 10:39:15.520985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:34.880 [2024-11-20 10:39:15.561807] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:36.252 Running I/O for 10 seconds... 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@868 -- # return 0 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@106 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@108 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:36.818 10:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local ret=1 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # local i 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=67 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']' 00:21:36.818 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@68 -- # sleep 0.25 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i-- )) 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:37.076 10:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@61 -- # read_io_count=131 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']' 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # ret=0 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@66 -- # break 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@70 -- # return 0 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@111 -- # killprocess 3284874 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3284874 ']' 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3284874 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:37.076 10:39:17 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3284874 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284874' 00:21:37.076 killing process with pid 3284874 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3284874 00:21:37.076 10:39:17 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3284874 00:21:37.335 Received shutdown signal, test time was about 0.897209 seconds 00:21:37.335 00:21:37.335 Latency(us) 00:21:37.335 [2024-11-20T09:39:18.066Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:37.335 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme1n1 : 0.89 287.13 17.95 0.00 0.00 220455.50 15978.30 216705.71 00:21:37.335 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme2n1 : 0.87 298.87 18.68 0.00 0.00 207232.49 3432.84 212711.13 00:21:37.335 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme3n1 : 0.87 300.55 18.78 0.00 0.00 201434.30 8488.47 217704.35 00:21:37.335 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme4n1 : 0.88 292.93 18.31 0.00 0.00 204309.49 
1451.15 215707.06 00:21:37.335 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme5n1 : 0.88 291.90 18.24 0.00 0.00 201203.81 15042.07 213709.78 00:21:37.335 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme6n1 : 0.90 285.54 17.85 0.00 0.00 202207.09 30583.47 200727.41 00:21:37.335 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme7n1 : 0.89 288.17 18.01 0.00 0.00 196405.39 16227.96 217704.35 00:21:37.335 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme8n1 : 0.90 285.76 17.86 0.00 0.00 194244.75 14105.84 218702.99 00:21:37.335 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme9n1 : 0.86 222.55 13.91 0.00 0.00 242944.16 21221.18 218702.99 00:21:37.335 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:37.335 Verification LBA range: start 0x0 length 0x400 00:21:37.335 Nvme10n1 : 0.86 222.05 13.88 0.00 0.00 238756.17 18599.74 236678.58 00:21:37.335 [2024-11-20T09:39:18.066Z] =================================================================================================================== 00:21:37.335 [2024-11-20T09:39:18.066Z] Total : 2775.45 173.47 0.00 0.00 209317.34 1451.15 236678.58 00:21:37.335 10:39:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # sleep 1 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@115 -- # kill -0 3284595 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@117 -- # 
stoptarget 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@99 -- # sync 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@102 -- # set +e 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:38.710 rmmod nvme_tcp 00:21:38.710 rmmod nvme_fabrics 00:21:38.710 rmmod nvme_keyring 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@106 -- # set -e 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@107 -- # return 0 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@336 -- # '[' -n 3284595 ']' 
00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@337 -- # killprocess 3284595 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # '[' -z 3284595 ']' 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # kill -0 3284595 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # uname 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3284595 00:21:38.710 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:38.711 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:38.711 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3284595' 00:21:38.711 killing process with pid 3284595 00:21:38.711 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@973 -- # kill 3284595 00:21:38.711 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@978 -- # wait 3284595 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # nvmf_fini 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@264 -- # local dev 00:21:38.970 10:39:19 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:38.970 10:39:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@130 -- # return 0 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:40.872 
10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # _dev=0 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@41 -- # dev_map=() 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/setup.sh@284 -- # iptr 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-save 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@542 -- # iptables-restore 00:21:40.872 00:21:40.872 real 0m8.111s 00:21:40.872 user 0m24.213s 00:21:40.872 sys 0m1.446s 00:21:40.872 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:40.872 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:40.872 ************************************ 00:21:40.872 END TEST nvmf_shutdown_tc2 00:21:40.872 ************************************ 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@164 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:41.132 ************************************ 00:21:41.132 START TEST nvmf_shutdown_tc3 00:21:41.132 ************************************ 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc3 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@122 -- # starttarget 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@260 -- # remove_target_ns 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # xtrace_disable 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # pci_devs=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/common.sh@135 -- # net_devs=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # e810=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@136 -- # local -ga e810 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # x722=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@137 -- # local -ga x722 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # mlx=() 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@138 -- # local -ga mlx 00:21:41.132 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:41.133 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:41.133 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:41.133 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:41.133 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:41.133 Found net devices under 0000:86:00.0: cvl_0_0 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@238 -- # (( 1 == 
0 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:41.133 Found net devices under 0000:86:00.1: cvl_0_1 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@262 -- # is_hw=yes 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@257 -- # create_target_ns 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:41.133 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@28 -- # local -g _dev 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:41.133 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # ips=() 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:41.133 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:41.133 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772161 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # tee 
/sys/class/net/cvl_0_0/ifalias 00:21:41.134 10.0.0.1 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@11 -- # local val=167772162 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:41.134 10.0.0.2 
00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:21:41.134 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:41.393 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@82 -- 
# ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 
00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:41.394 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:41.394 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.439 ms 00:21:41.394 00:21:41.394 --- 10.0.0.1 ping statistics --- 00:21:41.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.394 rtt min/avg/max/mdev = 0.439/0.439/0.439/0.000 ms 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 
00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:41.394 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:41.394 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.148 ms 00:21:41.394 00:21:41.394 --- 10.0.0.2 ping statistics --- 00:21:41.394 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:41.394 rtt min/avg/max/mdev = 0.148/0.148/0.148/0.000 ms 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # return 0 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:41.394 10:39:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:41.394 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:41.394 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:41.395 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.395 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:41.395 
10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@107 -- # local dev=target1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@109 -- # return 1 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@168 -- # dev= 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@169 -- # return 0 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:41.395 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@328 -- # nvmfpid=3286033 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # waitforlisten 3286033 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3286033 ']' 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting 
for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:41.395 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:41.395 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.654 [2024-11-20 10:39:22.149076] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:41.654 [2024-11-20 10:39:22.149122] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:41.654 [2024-11-20 10:39:22.226855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:41.654 [2024-11-20 10:39:22.266823] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:41.654 [2024-11-20 10:39:22.266860] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:41.654 [2024-11-20 10:39:22.266866] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:41.654 [2024-11-20 10:39:22.266872] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:41.654 [2024-11-20 10:39:22.266877] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:41.654 [2024-11-20 10:39:22.268473] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:41.654 [2024-11-20 10:39:22.268580] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:41.654 [2024-11-20 10:39:22.268688] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:41.654 [2024-11-20 10:39:22.268689] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:41.654 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:41.654 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:41.654 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:41.654 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:41.654 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.913 [2024-11-20 10:39:22.407641] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:41.913 10:39:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.913 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 
00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@29 -- # cat 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:41.914 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:41.914 Malloc1 00:21:41.914 [2024-11-20 10:39:22.514696] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:41.914 Malloc2 00:21:41.914 Malloc3 00:21:41.914 Malloc4 00:21:42.172 Malloc5 00:21:42.172 Malloc6 00:21:42.172 Malloc7 00:21:42.172 Malloc8 00:21:42.172 Malloc9 
00:21:42.172 Malloc10 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # perfpid=3286215 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # waitforlisten 3286215 /var/tmp/bdevperf.sock 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@835 -- # '[' -z 3286215 ']' 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 
00:21:42.431 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # config=() 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # local subsystem config 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 "adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": ${ddgst:-false} 00:21:42.431 }, 00:21:42.431 "method": "bdev_nvme_attach_controller" 00:21:42.431 } 00:21:42.431 EOF 00:21:42.431 )") 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 
"adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": ${ddgst:-false} 00:21:42.431 }, 00:21:42.431 "method": "bdev_nvme_attach_controller" 00:21:42.431 } 00:21:42.431 EOF 00:21:42.431 )") 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 "adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": ${ddgst:-false} 00:21:42.431 }, 00:21:42.431 "method": "bdev_nvme_attach_controller" 00:21:42.431 } 00:21:42.431 EOF 00:21:42.431 )") 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 "adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 
00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": ${ddgst:-false} 00:21:42.431 }, 00:21:42.431 "method": "bdev_nvme_attach_controller" 00:21:42.431 } 00:21:42.431 EOF 00:21:42.431 )") 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 "adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": ${ddgst:-false} 00:21:42.431 }, 00:21:42.431 "method": "bdev_nvme_attach_controller" 00:21:42.431 } 00:21:42.431 EOF 00:21:42.431 )") 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.431 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.431 { 00:21:42.431 "params": { 00:21:42.431 "name": "Nvme$subsystem", 00:21:42.431 "trtype": "$TEST_TRANSPORT", 00:21:42.431 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.431 "adrfam": "ipv4", 00:21:42.431 "trsvcid": "$NVMF_PORT", 00:21:42.431 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.431 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.431 "hdgst": ${hdgst:-false}, 00:21:42.431 "ddgst": 
${ddgst:-false} 00:21:42.431 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 } 00:21:42.432 EOF 00:21:42.432 )") 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.432 [2024-11-20 10:39:22.983860] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:42.432 [2024-11-20 10:39:22.983908] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3286215 ] 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.432 { 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme$subsystem", 00:21:42.432 "trtype": "$TEST_TRANSPORT", 00:21:42.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "$NVMF_PORT", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.432 "hdgst": ${hdgst:-false}, 00:21:42.432 "ddgst": ${ddgst:-false} 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 } 00:21:42.432 EOF 00:21:42.432 )") 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.432 { 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme$subsystem", 00:21:42.432 "trtype": "$TEST_TRANSPORT", 00:21:42.432 
"traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "$NVMF_PORT", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.432 "hdgst": ${hdgst:-false}, 00:21:42.432 "ddgst": ${ddgst:-false} 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 } 00:21:42.432 EOF 00:21:42.432 )") 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.432 { 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme$subsystem", 00:21:42.432 "trtype": "$TEST_TRANSPORT", 00:21:42.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "$NVMF_PORT", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.432 "hdgst": ${hdgst:-false}, 00:21:42.432 "ddgst": ${ddgst:-false} 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 } 00:21:42.432 EOF 00:21:42.432 )") 00:21:42.432 10:39:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:21:42.432 { 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme$subsystem", 00:21:42.432 "trtype": "$TEST_TRANSPORT", 00:21:42.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "$NVMF_PORT", 00:21:42.432 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:42.432 "hdgst": ${hdgst:-false}, 00:21:42.432 "ddgst": ${ddgst:-false} 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 } 00:21:42.432 EOF 00:21:42.432 )") 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # cat 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@396 -- # jq . 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@397 -- # IFS=, 00:21:42.432 10:39:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme1", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme2", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme3", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 
"method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme4", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme5", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme6", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme7", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme8", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme9", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 },{ 00:21:42.432 "params": { 00:21:42.432 "name": "Nvme10", 00:21:42.432 "trtype": "tcp", 00:21:42.432 "traddr": "10.0.0.2", 00:21:42.432 "adrfam": "ipv4", 00:21:42.432 "trsvcid": "4420", 00:21:42.432 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:42.432 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:42.432 "hdgst": false, 00:21:42.432 "ddgst": false 00:21:42.432 }, 00:21:42.432 "method": "bdev_nvme_attach_controller" 00:21:42.432 }' 00:21:42.432 [2024-11-20 10:39:23.058336] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:42.432 [2024-11-20 10:39:23.099070] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:21:44.332 Running I/O for 10 seconds... 
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@868 -- # return 0 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@128 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@131 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@133 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@51 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@55 -- # '[' -z Nvme1n1 ']' 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local ret=1 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # local i 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i = 10 )) 00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 )) 00:21:44.332 10:39:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=67
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 67 -ge 100 ']'
00:21:44.332 10:39:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@68 -- # sleep 0.25
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i-- ))
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # (( i != 0 ))
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # jq -r '.bdevs[0].num_read_ops'
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@563 -- # xtrace_disable
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@61 -- # read_io_count=131
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # '[' 131 -ge 100 ']'
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # ret=0
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@66 -- # break
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@70 -- # return 0
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # killprocess 3286033
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3286033 ']'
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3286033
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # uname
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3286033
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3286033'
killing process with pid 3286033
00:21:44.596 10:39:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@973 -- # kill 3286033
00:21:44.596 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@978 -- # wait 3286033
00:21:44.596 [2024-11-20 10:39:25.295334] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972700 is same with the state(6) to be set
00:21:44.596 [2024-11-20 10:39:25.295389] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1972700 is same with the state(6) to be set
00:21:44.596 [2024-11-20 10:39:25.296232] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea760 is same with the state(6) to be set
00:21:44.596 [2024-11-20 10:39:25.296266 - 10:39:25.296665] (same message repeated for tqpair=0x1aea760)
00:21:44.597 [2024-11-20 10:39:25.298941] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19730c0 is same with the state(6) to be set
00:21:44.597 [2024-11-20 10:39:25.298966 - 10:39:25.299367] (same message repeated for tqpair=0x19730c0)
00:21:44.597 [2024-11-20 10:39:25.300454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19735b0 is same with the state(6) to be set
00:21:44.598 [2024-11-20 10:39:25.300477 - 10:39:25.300849] (same message repeated for tqpair=0x19735b0)
00:21:44.598 [2024-11-20 10:39:25.301482] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set
00:21:44.598 [2024-11-20 10:39:25.301497 - 10:39:25.301651] (same message repeated for tqpair=0x1973930)
00:21:44.598 [2024-11-20 10:39:25.301657]
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301664] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301670] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301677] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301698] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301711] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301718] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301730] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.598 [2024-11-20 10:39:25.301736] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301749] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301756] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301763] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301769] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301782] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301788] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301794] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301800] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301806] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301812] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301818] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301824] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301830] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301835] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301841] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301847] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301853] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301859] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301872] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301878] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301884] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.301890] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973930 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302787] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302793] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302799] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302807] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302826] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302832] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302838] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302844] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302850] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302856] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302864] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302869] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302875] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302882] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302895] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302902] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302908] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302915] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302924] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302930] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302936] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302949] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302956] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302962] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302968] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302980] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.302994] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303005] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303011] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303017] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303024] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303030] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303037] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303043] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303050] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303056] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303063] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303069] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303076] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303082] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303089] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303102] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303113] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303127] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303132] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303139] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303145] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303151] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303157] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.599 [2024-11-20 10:39:25.303163] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.600 [2024-11-20 10:39:25.303170] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.600 [2024-11-20 10:39:25.303176] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.600 [2024-11-20 10:39:25.303182] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.600 [2024-11-20 10:39:25.303188] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1973e00 is same with the state(6) to be set 00:21:44.600 [2024-11-20 10:39:25.303339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.600 [2024-11-20 10:39:25.303370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.600 [2024-11-20 10:39:25.303380] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.600 [2024-11-20 10:39:25.303387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.600 [2024-11-20 10:39:25.303395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.600 [2024-11-20 10:39:25.303401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.600 [2024-11-20 10:39:25.303409] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 
00:21:44.600 [2024-11-20 10:39:25.303416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd7310 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303477] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303509] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303522] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11790 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303580] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303593] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303606] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303620] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aad50 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303641] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303657] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303699] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4370 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303722] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303740] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303754] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303768] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303781] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd64d0 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303801] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303816] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303830] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:21:44.600 [2024-11-20 10:39:25.303850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab1b0 is same with the state(6) to be set
00:21:44.600 [2024-11-20 10:39:25.303904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.600 [2024-11-20 10:39:25.303913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.600 [2024-11-20 10:39:25.303934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.600 [2024-11-20 10:39:25.303950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.600 [2024-11-20 10:39:25.303965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.600 [2024-11-20 10:39:25.303973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.600 [2024-11-20 10:39:25.303979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.303990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:21:44.601 [2024-11-20 10:39:25.304236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:21:44.601 [2024-11-20 10:39:25.304235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set
00:21:44.601 [2024-11-20 10:39:25.304244] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304249] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304257] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304263] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304271] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304280] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304289] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304295] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304303] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304310] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304318] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304327] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304336] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304343] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304353] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304364] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304371] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304378] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304385] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304393] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304400] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304407] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304415] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304421] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 [2024-11-20 10:39:25.304424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.601 [2024-11-20 10:39:25.304433] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.601 
[2024-11-20 10:39:25.304436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.601 [2024-11-20 10:39:25.304441] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304443] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304447] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304454] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304462] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304469] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304476] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304483] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:44.602 [2024-11-20 10:39:25.304492] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304502] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304509] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304516] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304524] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304532] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 
lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304549] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304558] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304565] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304572] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304579] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304593] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304608] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304615] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304630] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304654] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304660] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304669] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304683] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304696] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304704] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304713] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 
[2024-11-20 10:39:25.304716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19742d0 is same with the state(6) to be set 00:21:44.602 [2024-11-20 10:39:25.304723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304732] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.602 [2024-11-20 10:39:25.304790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.602 [2024-11-20 10:39:25.304798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:44.603 [2024-11-20 10:39:25.304897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.304933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.304941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305085] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 
[2024-11-20 10:39:25.305338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.603 [2024-11-20 10:39:25.305473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.603 [2024-11-20 10:39:25.305481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305504] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305520] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305527] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305546] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305553] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305569] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305583] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305590] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305596] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 
[2024-11-20 10:39:25.305602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305611] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305624] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305632] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305639] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305646] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305653] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305662] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305674] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305681] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305688] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305695] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305703] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305709] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305717] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305724] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305738] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305741] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305745] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305751] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305761] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305768] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305776] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305783] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 
[2024-11-20 10:39:25.305790] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.604 [2024-11-20 10:39:25.305798] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.604 [2024-11-20 10:39:25.305801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.604 [2024-11-20 10:39:25.305805] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.605 [2024-11-20 10:39:25.305812] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305819] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305827] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305833] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305839] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305844] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 
00:21:44.605 [2024-11-20 10:39:25.305851] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305857] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305865] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305871] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305877] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305883] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305889] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305894] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305900] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305914] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.305919] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 
10:39:25.305925] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19747c0 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306522] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306536] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306550] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306564] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306575] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306582] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306588] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306594] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306600] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306606] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306618] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306623] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306629] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306636] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306651] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306690] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306731] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306813] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306854] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306895] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306937] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.306978] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307019] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307060] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307104] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307235] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307277] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307317] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307357] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307407] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307448] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307489] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307530] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307571] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307612] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307657] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307699] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307739] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307781] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307821] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307863] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307903] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307943] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.307986] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308026] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308067] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308112] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308158] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308200] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308251] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308292] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308335] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308375] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.605 [2024-11-20 10:39:25.308416] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.606 [2024-11-20 10:39:25.308456] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.606 [2024-11-20 10:39:25.308497] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.606 [2024-11-20 10:39:25.308539] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1aea290 is same with the state(6) to be set 00:21:44.877 [2024-11-20 10:39:25.319839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.319988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.319999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 
10:39:25.320063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320451] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 
nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.877 [2024-11-20 10:39:25.320614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320725] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.320985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.320996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321043] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 
10:39:25.321062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.877 [2024-11-20 10:39:25.321112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.877 [2024-11-20 10:39:25.321120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321171] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 
[2024-11-20 10:39:25.321419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321542] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.321684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.878 [2024-11-20 10:39:25.321693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322301] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd7310 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322355] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322397] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 
10:39:25.322419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322436] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1050 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.322458] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e11790 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322486] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322507] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322525] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322543] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) 
qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04e00 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.322591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322611] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322630] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322648] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322664] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcc010 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.322684] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aad50 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322699] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4370 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd64d0 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322739] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ab1b0 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.322766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322786] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322804] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322821] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:44.878 [2024-11-20 10:39:25.322830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.878 [2024-11-20 10:39:25.322839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13000 is same with the state(6) to be set 
00:21:44.878 [2024-11-20 10:39:25.327627] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:44.878 [2024-11-20 10:39:25.327664] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:44.878 [2024-11-20 10:39:25.327681] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:44.878 [2024-11-20 10:39:25.328381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.878 [2024-11-20 10:39:25.328414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ab1b0 with addr=10.0.0.2, port=4420 00:21:44.878 [2024-11-20 10:39:25.328429] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab1b0 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.328726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.878 [2024-11-20 10:39:25.328747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19aad50 with addr=10.0.0.2, port=4420 00:21:44.878 [2024-11-20 10:39:25.328760] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aad50 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.328921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.878 [2024-11-20 10:39:25.328939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd7310 with addr=10.0.0.2, port=4420 00:21:44.878 [2024-11-20 10:39:25.328952] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd7310 is same with the state(6) to be set 00:21:44.878 [2024-11-20 10:39:25.329794] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:44.878 [2024-11-20 10:39:25.330252] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: 
Unexpected PDU type 0x00 00:21:44.878 [2024-11-20 10:39:25.330561] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:44.878 [2024-11-20 10:39:25.330593] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ab1b0 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.330613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aad50 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.330629] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd7310 (9): Bad file descriptor 00:21:44.878 [2024-11-20 10:39:25.330766] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:44.878 [2024-11-20 10:39:25.330831] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:44.878 [2024-11-20 10:39:25.330902] nvme_tcp.c:1184:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:44.879 [2024-11-20 10:39:25.330936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:44.879 [2024-11-20 10:39:25.330952] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:44.879 [2024-11-20 10:39:25.330966] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:44.879 [2024-11-20 10:39:25.330979] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:44.879 [2024-11-20 10:39:25.330993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:44.879 [2024-11-20 10:39:25.331004] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:44.879 [2024-11-20 10:39:25.331015] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:44.879 [2024-11-20 10:39:25.331026] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:44.879 [2024-11-20 10:39:25.331038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:44.879 [2024-11-20 10:39:25.331049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:44.879 [2024-11-20 10:39:25.331060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:44.879 [2024-11-20 10:39:25.331071] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:44.879 [2024-11-20 10:39:25.331190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331370] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 
nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.879 [2024-11-20 10:39:25.331688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331837] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.331987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.331999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 
10:39:25.332308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332466] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.879 [2024-11-20 10:39:25.332574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.879 [2024-11-20 10:39:25.332586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332695] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 
[2024-11-20 10:39:25.332779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332930] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.332969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.332982] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x286d010 is same with the state(6) to be set 00:21:44.880 [2024-11-20 10:39:25.333119] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1050 (9): Bad file descriptor 00:21:44.880 [2024-11-20 10:39:25.333162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04e00 (9): Bad file descriptor 00:21:44.880 [2024-11-20 10:39:25.333189] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcc010 (9): Bad file descriptor 00:21:44.880 [2024-11-20 10:39:25.333244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e13000 (9): Bad file descriptor 00:21:44.880 [2024-11-20 10:39:25.335019] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:44.880 [2024-11-20 10:39:25.335137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 
10:39:25.335190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335431] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.880 [2024-11-20 10:39:25.335449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.880 [2024-11-20 10:39:25.335461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) log pairs repeat for cid:5 through cid:60 (qid:1, nsid:1, lba 25216 through 32256 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; timestamps 10:39:25.335476-10:39:25.336493), elided ...]
00:21:44.881 [2024-11-20 10:39:25.336502] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d985d0 is same with the state(6) to be set
00:21:44.881 [2024-11-20 10:39:25.337746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.881 [2024-11-20 10:39:25.337764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... identical READ / ABORTED - SQ DELETION (00/08) log pairs repeat for cid:1 through cid:54 (qid:1, nsid:1, lba 16512 through 23296 in steps of 128, len:128, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0; timestamps 10:39:25.337778-10:39:25.338755), elided ...]
00:21:44.882 [2024-11-20 10:39:25.338765] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.338917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.338926] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1db22f0 is same with the state(6) to be set 00:21:44.882 [2024-11-20 10:39:25.340179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.882 [2024-11-20 10:39:25.340238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340338] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 
nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.882 [2024-11-20 10:39:25.340513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.882 [2024-11-20 10:39:25.340521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.883 [2024-11-20 10:39:25.340549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340649] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 
10:39:25.340963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.340991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.340999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341062] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 
[2024-11-20 10:39:25.341277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.883 [2024-11-20 10:39:25.341369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.883 [2024-11-20 10:39:25.341378] nvme_tcp.c: 
326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1c2ce90 is same with the state(6) to be set 00:21:44.883 [2024-11-20 10:39:25.342556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:44.883 [2024-11-20 10:39:25.342575] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:44.883 [2024-11-20 10:39:25.342586] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:44.883 [2024-11-20 10:39:25.342864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.883 [2024-11-20 10:39:25.342882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e13000 with addr=10.0.0.2, port=4420 00:21:44.883 [2024-11-20 10:39:25.342892] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13000 is same with the state(6) to be set 00:21:44.883 [2024-11-20 10:39:25.343492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.884 [2024-11-20 10:39:25.343511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a4370 with addr=10.0.0.2, port=4420 00:21:44.884 [2024-11-20 10:39:25.343521] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4370 is same with the state(6) to be set 00:21:44.884 [2024-11-20 10:39:25.343769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.884 [2024-11-20 10:39:25.343783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd64d0 with addr=10.0.0.2, port=4420 00:21:44.884 [2024-11-20 10:39:25.343791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd64d0 is same with the state(6) to be set 00:21:44.884 [2024-11-20 10:39:25.344019] posix.c:1054:posix_sock_create: *ERROR*: connect() 
failed, errno = 111 00:21:44.884 [2024-11-20 10:39:25.344032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e11790 with addr=10.0.0.2, port=4420 00:21:44.884 [2024-11-20 10:39:25.344040] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11790 is same with the state(6) to be set 00:21:44.884 [2024-11-20 10:39:25.344052] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e13000 (9): Bad file descriptor 00:21:44.884 [2024-11-20 10:39:25.344905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] resetting controller 00:21:44.884 [2024-11-20 10:39:25.344926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] resetting controller 00:21:44.884 [2024-11-20 10:39:25.344938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:21:44.884 [2024-11-20 10:39:25.344981] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4370 (9): Bad file descriptor 00:21:44.884 [2024-11-20 10:39:25.344993] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd64d0 (9): Bad file descriptor 00:21:44.884 [2024-11-20 10:39:25.345003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e11790 (9): Bad file descriptor 00:21:44.884 [2024-11-20 10:39:25.345013] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:44.884 [2024-11-20 10:39:25.345025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:44.884 [2024-11-20 10:39:25.345036] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 
00:21:44.884 [2024-11-20 10:39:25.345045] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:44.884 [2024-11-20 10:39:25.345112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345218] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345539] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345619] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 
10:39:25.345791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345870] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.884 [2024-11-20 10:39:25.345906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.884 [2024-11-20 10:39:25.345913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.345928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.345943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 
nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.345957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.345972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.345987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.345995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.885 [2024-11-20 10:39:25.346037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346118] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.346147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.346155] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x285eb40 is same with the state(6) to be set 00:21:44.885 [2024-11-20 10:39:25.347143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:44.885 [2024-11-20 10:39:25.347285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347368] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347619] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347699] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.885 [2024-11-20 10:39:25.347721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.885 [2024-11-20 10:39:25.347728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347795] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 
10:39:25.347869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347948] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.347985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.347991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 
nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348044] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.348093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.348101] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x286be40 is same with the state(6) to be set 00:21:44.886 [2024-11-20 10:39:25.349107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:44.886 [2024-11-20 10:39:25.349121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349211] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.886 [2024-11-20 10:39:25.349438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.886 [2024-11-20 10:39:25.349444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349467] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349547] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 
10:39:25.349714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349795] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 
nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349953] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:21:44.887 [2024-11-20 10:39:25.349961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.349991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.349998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.350005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.350012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.350020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.350026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.350034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.350041] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.350049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:44.887 [2024-11-20 10:39:25.350055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:44.887 [2024-11-20 10:39:25.350062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x286e460 is same with the state(6) to be set 00:21:44.887 [2024-11-20 10:39:25.351029] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6, 1] resetting controller 00:21:44.887 [2024-11-20 10:39:25.351045] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode7, 1] resetting controller 00:21:44.887 task offset: 24576 on job bdev=Nvme1n1 fails 00:21:44.887 00:21:44.887 Latency(us) 00:21:44.887 [2024-11-20T09:39:25.618Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:44.887 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme1n1 ended in about 0.77 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme1n1 : 0.77 247.76 15.48 82.59 0.00 191389.74 16227.96 213709.78 00:21:44.887 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme2n1 ended in about 0.78 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme2n1 : 0.78 247.38 15.46 82.46 0.00 187764.54 16602.45 215707.06 00:21:44.887 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme3n1 ended in about 0.79 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme3n1 : 0.79 243.47 15.22 81.16 0.00 
187028.60 14230.67 220700.28 00:21:44.887 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme4n1 ended in about 0.78 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme4n1 : 0.78 246.85 15.43 82.28 0.00 180459.52 22843.98 203723.34 00:21:44.887 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme5n1 ended in about 0.79 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme5n1 : 0.79 161.82 10.11 80.91 0.00 240028.04 15915.89 218702.99 00:21:44.887 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.887 Job: Nvme6n1 ended in about 0.80 seconds with error 00:21:44.887 Verification LBA range: start 0x0 length 0x400 00:21:44.887 Nvme6n1 : 0.80 160.37 10.02 80.18 0.00 237287.78 18599.74 216705.71 00:21:44.888 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.888 Job: Nvme7n1 ended in about 0.80 seconds with error 00:21:44.888 Verification LBA range: start 0x0 length 0x400 00:21:44.888 Nvme7n1 : 0.80 159.98 10.00 79.99 0.00 232792.42 14542.75 231685.36 00:21:44.888 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.888 Job: Nvme8n1 ended in about 0.79 seconds with error 00:21:44.888 Verification LBA range: start 0x0 length 0x400 00:21:44.888 Nvme8n1 : 0.79 168.04 10.50 81.47 0.00 218200.81 12607.88 218702.99 00:21:44.888 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.888 Job: Nvme9n1 ended in about 0.80 seconds with error 00:21:44.888 Verification LBA range: start 0x0 length 0x400 00:21:44.888 Nvme9n1 : 0.80 164.58 10.29 79.80 0.00 218597.35 6054.28 221698.93 00:21:44.888 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:44.888 Job: Nvme10n1 ended in about 0.79 seconds with error 00:21:44.888 Verification LBA range: start 0x0 length 0x400 
00:21:44.888 Nvme10n1 : 0.79 166.36 10.40 80.66 0.00 210802.69 16352.79 235679.94 00:21:44.888 [2024-11-20T09:39:25.619Z] =================================================================================================================== 00:21:44.888 [2024-11-20T09:39:25.619Z] Total : 1966.62 122.91 811.51 0.00 207683.27 6054.28 235679.94 00:21:44.888 [2024-11-20 10:39:25.384814] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:21:44.888 [2024-11-20 10:39:25.384865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9, 1] resetting controller 00:21:44.888 [2024-11-20 10:39:25.385166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.385184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd7310 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.385196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd7310 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.385352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.385363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19aad50 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.385370] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19aad50 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.385535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.385546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19ab1b0 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.385553] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19ab1b0 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.385561] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.385567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.385577] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.385586] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.385594] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.385600] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.385613] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.385619] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.385626] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.385631] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.385637] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.385643] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 
00:21:44.888 [2024-11-20 10:39:25.386000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.386015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dcc010 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.386023] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dcc010 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.386243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.386255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a1050 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.386262] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a1050 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.386402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.386413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e04e00 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.386420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e04e00 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.386434] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd7310 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.386446] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19aad50 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.386455] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19ab1b0 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.386492] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Unable to perform failover, 
already in progress. 00:21:44.888 [2024-11-20 10:39:25.386502] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode2, 1] Unable to perform failover, already in progress. 00:21:44.888 [2024-11-20 10:39:25.386512] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode4, 1] Unable to perform failover, already in progress. 00:21:44.888 [2024-11-20 10:39:25.386532] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] Unable to perform failover, already in progress. 00:21:44.888 [2024-11-20 10:39:25.387217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8, 1] resetting controller 00:21:44.888 [2024-11-20 10:39:25.387261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dcc010 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.387272] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a1050 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.387282] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e04e00 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.387290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387296] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387308] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.387314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] Resetting controller failed. 
00:21:44.888 [2024-11-20 10:39:25.387322] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387327] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387333] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.387339] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.387345] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387351] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387357] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.387363] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller failed. 
00:21:44.888 [2024-11-20 10:39:25.387408] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10, 1] resetting controller 00:21:44.888 [2024-11-20 10:39:25.387418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5, 1] resetting controller 00:21:44.888 [2024-11-20 10:39:25.387426] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3, 1] resetting controller 00:21:44.888 [2024-11-20 10:39:25.387687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.387700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e13000 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.387707] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e13000 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.387713] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.387732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.387738] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387744] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] in failed state. 
00:21:44.888 [2024-11-20 10:39:25.387755] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.387762] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.387767] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.387773] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.387779] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.388064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.388078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1e11790 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.388085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1e11790 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.388247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.388257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1dd64d0 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 10:39:25.388264] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1dd64d0 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.388518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:44.888 [2024-11-20 10:39:25.388528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19a4370 with addr=10.0.0.2, port=4420 00:21:44.888 [2024-11-20 
10:39:25.388535] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19a4370 is same with the state(6) to be set 00:21:44.888 [2024-11-20 10:39:25.388543] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e13000 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.388570] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e11790 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.388579] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1dd64d0 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.388588] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19a4370 (9): Bad file descriptor 00:21:44.888 [2024-11-20 10:39:25.388595] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.388601] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.388607] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.388613] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.388635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.388641] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.388647] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] in failed state. 
00:21:44.888 [2024-11-20 10:39:25.388653] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.388660] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.388665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.388671] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.388677] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] Resetting controller failed. 00:21:44.888 [2024-11-20 10:39:25.388683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Ctrlr is in error state 00:21:44.888 [2024-11-20 10:39:25.388689] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] controller reinitialization failed 00:21:44.888 [2024-11-20 10:39:25.388695] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] in failed state. 00:21:44.888 [2024-11-20 10:39:25.388701] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] Resetting controller failed. 
00:21:45.202 10:39:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@137 -- # sleep 1 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@138 -- # NOT wait 3286215 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@652 -- # local es=0 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3286215 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@640 -- # local arg=wait 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # type -t wait 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # wait 3286215 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@655 -- # es=255 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@664 -- # es=127 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@665 -- # case "$es" in 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@672 -- # es=1 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@140 -- # stoptarget 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # nvmfcleanup 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@99 -- # sync 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@102 -- # set +e 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:46.156 rmmod nvme_tcp 00:21:46.156 rmmod nvme_fabrics 00:21:46.156 rmmod nvme_keyring 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@106 -- # set -e 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@107 -- # return 0 00:21:46.156 10:39:26 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@336 -- # '[' -n 3286033 ']' 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@337 -- # killprocess 3286033 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # '[' -z 3286033 ']' 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # kill -0 3286033 00:21:46.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3286033) - No such process 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3286033 is not found' 00:21:46.156 Process with pid 3286033 is not found 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # nvmf_fini 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@264 -- # local dev 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:46.156 10:39:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@130 -- # return 0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@222 -- # [[ -n 
'' ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # _dev=0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@41 -- # dev_map=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/setup.sh@284 -- # iptr 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-save 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@542 -- # iptables-restore 00:21:48.693 00:21:48.693 real 0m7.191s 00:21:48.693 user 0m16.199s 00:21:48.693 sys 0m1.320s 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:48.693 ************************************ 00:21:48.693 END TEST nvmf_shutdown_tc3 00:21:48.693 ************************************ 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ e810 == \e\8\1\0 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@166 -- # [[ tcp == \r\d\m\a ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@167 -- # run_test nvmf_shutdown_tc4 nvmf_shutdown_tc4 
00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:48.693 ************************************ 00:21:48.693 START TEST nvmf_shutdown_tc4 00:21:48.693 ************************************ 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1129 -- # nvmf_shutdown_tc4 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@145 -- # starttarget 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@16 -- # nvmftestinit 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@260 -- # remove_target_ns 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:48.693 10:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@125 -- # xtrace_disable 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # pci_devs=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@131 -- # local -a pci_devs 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # pci_net_devs=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # pci_drivers=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@133 -- # local -A pci_drivers 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # net_devs=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@135 -- # local -ga net_devs 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # e810=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@136 -- # local -ga e810 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # x722=() 
00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@137 -- # local -ga x722 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # mlx=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@138 -- # local -ga mlx 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@159 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:48.693 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:48.693 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:48.693 Found net devices under 0000:86:00.0: cvl_0_0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@234 -- # [[ up == up ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:48.693 Found net devices under 0000:86:00.1: cvl_0_1 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:21:48.693 
10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@262 -- # is_hw=yes 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@257 -- # create_target_ns 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.693 
10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@27 -- # local -gA dev_map 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@28 -- # local -g _dev 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # ips=() 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:21:48.693 10:39:28 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:21:48.693 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:21:48.694 10:39:28 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772161 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:21:48.694 10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@205 -- # local 
-n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@11 -- # local val=167772162 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:21:48.694 10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 
-- # ip link set cvl_0_0 up 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:21:48.694 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@38 -- # ping_ips 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 
-- # echo cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:21:48.694 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:48.694 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:21:48.694 00:21:48.694 --- 10.0.0.1 ping statistics --- 00:21:48.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.694 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:21:48.694 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:21:48.694 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.152 ms 00:21:48.694 00:21:48.694 --- 10.0.0.2 ping statistics --- 00:21:48.694 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:48.694 rtt min/avg/max/mdev = 0.152/0.152/0.152/0.000 ms 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair++ )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@270 -- # return 0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:48.694 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:21:48.694 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=initiator1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:21:48.694 
10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # get_net_dev target1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@107 -- # local dev=target1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@109 -- # return 1 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@168 -- # dev= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@169 -- # return 0 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:21:48.694 10:39:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@19 -- # nvmfappstart -m 0x1E 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@328 -- # nvmfpid=3287437 00:21:48.694 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@329 -- # waitforlisten 3287437 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@835 -- # '[' -z 3287437 ']' 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@840 -- # local max_retries=100 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:48.695 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@844 -- # xtrace_disable 00:21:48.695 10:39:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:48.953 [2024-11-20 10:39:29.438560] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:21:48.953 [2024-11-20 10:39:29.438618] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:48.953 [2024-11-20 10:39:29.518390] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:48.953 [2024-11-20 10:39:29.559700] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:48.953 [2024-11-20 10:39:29.559736] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:48.953 [2024-11-20 10:39:29.559743] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:48.953 [2024-11-20 10:39:29.559749] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:48.953 [2024-11-20 10:39:29.559757] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:48.953 [2024-11-20 10:39:29.561383] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:21:48.953 [2024-11-20 10:39:29.561415] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:21:48.953 [2024-11-20 10:39:29.561519] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:21:48.953 [2024-11-20 10:39:29.561520] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@868 -- # return 0 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.888 [2024-11-20 10:39:30.314937] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:49.888 10:39:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@23 -- # num_subsystems=({1..10}) 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@25 -- # timing_enter create_subsystems 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@726 -- # xtrace_disable 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@27 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 
00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.888 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.889 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@28 -- # for i in "${num_subsystems[@]}" 00:21:49.889 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@29 -- # cat 00:21:49.889 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@36 -- # rpc_cmd 00:21:49.889 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@563 -- # xtrace_disable 00:21:49.889 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:49.889 Malloc1 00:21:49.889 [2024-11-20 10:39:30.422004] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:49.889 Malloc2 00:21:49.889 Malloc3 00:21:49.889 Malloc4 00:21:49.889 Malloc5 00:21:49.889 Malloc6 00:21:50.146 Malloc7 00:21:50.146 Malloc8 00:21:50.146 Malloc9 
00:21:50.146 Malloc10 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@37 -- # timing_exit create_subsystems 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@732 -- # xtrace_disable 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@149 -- # perfpid=3287751 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@148 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 45056 -O 4096 -w randwrite -t 20 -r 'trtype:tcp adrfam:IPV4 traddr:10.0.0.2 trsvcid:4420' -P 4 00:21:50.146 10:39:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@150 -- # sleep 5 00:21:50.404 [2024-11-20 10:39:30.931843] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@152 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@155 -- # killprocess 3287437 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3287437 ']' 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3287437 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # uname 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3287437 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3287437' 00:21:55.675 killing process with pid 3287437 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@973 -- # kill 3287437 00:21:55.675 10:39:35 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@978 -- # wait 3287437 00:21:55.675 [2024-11-20 10:39:35.929676] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 
10:39:35.929720] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929728] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929735] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929742] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929748] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929754] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929760] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929766] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929773] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.929778] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b40e0 is same with the state(6) to be set 00:21:55.675 [2024-11-20 10:39:35.931068] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931095] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931103] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931110] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931117] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931124] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931130] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1645f60 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931505] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931529] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931537] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931544] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931551] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931557] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646450 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931837] 
tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931866] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931874] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931881] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931887] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931893] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931899] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.931905] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15b3c10 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.933974] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647b10 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 [2024-11-20 10:39:35.933995] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647b10 is same with the state(6) to be set 
00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934005] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647b10 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934013] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647b10 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934020] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647b10 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O 
failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934393] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode3, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.676 NVMe io qpair process completion error 00:21:55.676 [2024-11-20 10:39:35.934514] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934534] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934553] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934560] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934570] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934576] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934581] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1646ca0 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write 
completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934873] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934892] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 [2024-11-20 10:39:35.934898] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 starting I/O failed: -6 00:21:55.676 [2024-11-20 10:39:35.934906] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934912] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 [2024-11-20 10:39:35.934918] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647170 is same with the state(6) to be set 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 Write 
completed with error (sct=0, sc=8) 00:21:55.676 Write completed with error (sct=0, sc=8) 00:21:55.676 starting I/O failed: -6
[... repeated "Write completed with error (sct=0, sc=8)" / "starting I/O failed: -6" entries elided; every write in this interval (00:21:55.676-00:21:55.680) failed with the same status (sct=0, sc=8). Distinct error events follow in order: ...]
00:21:55.676 [2024-11-20 10:39:35.935194] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.676 [2024-11-20 10:39:35.935332] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1647640 is same with the state(6) to be set [message repeated 9 times, 10:39:35.935332-10:39:35.935399]
00:21:55.677 [2024-11-20 10:39:35.935648] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16467d0 is same with the state(6) to be set [message repeated 6 times, 10:39:35.935648-10:39:35.935688]
00:21:55.677 [2024-11-20 10:39:35.936126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.677 [2024-11-20 10:39:35.937130] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.678 [2024-11-20 10:39:35.938830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.678 NVMe io qpair process completion error
00:21:55.678 [2024-11-20 10:39:35.939785] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.678 [2024-11-20 10:39:35.940607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.679 [2024-11-20 10:39:35.941603] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 4
00:21:55.679 [2024-11-20 10:39:35.943315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode10, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:21:55.679 NVMe io qpair process completion error
00:21:55.680 [2024-11-20 10:39:35.944104] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.680 [2024-11-20 10:39:35.945003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O
failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.680 starting I/O failed: -6 00:21:55.680 Write 
completed with error (sct=0, sc=8) 00:21:55.680 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 [2024-11-20 10:39:35.946013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O 
failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting 
I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 [2024-11-20 10:39:35.947933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode7, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.681 NVMe io qpair process completion error 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error 
(sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 
[2024-11-20 10:39:35.948894] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 
starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 starting I/O failed: -6 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.681 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 [2024-11-20 10:39:35.949755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: 
-6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with 
error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 [2024-11-20 10:39:35.950738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O 
failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 
00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: 
-6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 starting I/O failed: -6 00:21:55.682 [2024-11-20 10:39:35.954960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode8, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.682 NVMe io qpair process completion error 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.682 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error 
(sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 [2024-11-20 10:39:35.955989] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O failed: -6 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 Write completed with error (sct=0, sc=8) 00:21:55.683 starting I/O 
failed: -6
00:21:55.683 Write completed with error (sct=0, sc=8)
00:21:55.683 starting I/O failed: -6
[... the two entries above repeat for each in-flight I/O; duplicates elided ...]
00:21:55.683 [2024-11-20 10:39:35.956959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.683 [2024-11-20 10:39:35.957950] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.684 [2024-11-20 10:39:35.962024] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode5, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.684 NVMe io qpair process completion error
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.684 [2024-11-20 10:39:35.963618] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.685 [2024-11-20 10:39:35.964508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.685 [2024-11-20 10:39:35.965518] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.686 [2024-11-20 10:39:35.967333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode6, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:21:55.686 NVMe io qpair process completion error
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.686 [2024-11-20 10:39:35.968389] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 1
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.686 [2024-11-20 10:39:35.969275] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 3
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.687 [2024-11-20 10:39:35.970253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 4
[... repeated write-error / "starting I/O failed: -6" entries elided ...]
00:21:55.687 starting I/O failed: -6
00:21:55.687 Write completed with error
(sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with 
error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 [2024-11-20 10:39:35.971820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode9, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.687 NVMe io qpair process completion error 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write 
completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 
00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.687 starting I/O failed: -6 00:21:55.687 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 
00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with 
error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 [2024-11-20 10:39:35.974139] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.688 starting I/O failed: -6 00:21:55.688 
Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 
00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: 
-6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.688 Write completed with error (sct=0, sc=8) 00:21:55.688 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 [2024-11-20 10:39:35.978928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode2, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.689 NVMe io qpair process completion error 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with 
error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 
00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 [2024-11-20 10:39:35.979877] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 
00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 [2024-11-20 10:39:35.980783] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 3 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 
00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 
00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.689 starting I/O failed: -6 00:21:55.689 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 [2024-11-20 10:39:35.981799] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: 
[nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O 
failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting 
I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 starting I/O failed: -6 00:21:55.690 [2024-11-20 10:39:35.984519] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode4, 1] CQ transport error -6 (No such device or address) on qpair id 2 00:21:55.690 NVMe io qpair process completion error 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 
00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 
00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.690 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 
00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 
00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Write completed with error (sct=0, sc=8) 00:21:55.691 Initializing NVMe Controllers 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode7 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode8 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode5 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode6 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode9 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode2 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode4 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode3 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:21:55.691 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode10 00:21:55.691 Controller IO queue size 128, less than required. 00:21:55.691 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:21:55.691 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 with lcore 0 00:21:55.691 Initialization complete. Launching workers. 
00:21:55.691 ======================================================== 00:21:55.691 Latency(us) 00:21:55.691 Device Information : IOPS MiB/s Average min max 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode7) NSID 1 from core 0: 2263.56 97.26 56552.00 898.83 105809.48 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode8) NSID 1 from core 0: 2203.16 94.67 58116.89 874.95 104750.26 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode5) NSID 1 from core 0: 2149.11 92.34 59620.46 691.02 104408.93 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode6) NSID 1 from core 0: 2147.84 92.29 59704.33 904.97 109398.73 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode9) NSID 1 from core 0: 2195.74 94.35 58415.15 745.53 112293.46 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode2) NSID 1 from core 0: 2199.56 94.51 58317.99 666.24 99637.49 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode4) NSID 1 from core 0: 2218.84 95.34 57858.30 845.18 118399.23 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode3) NSID 1 from core 0: 2207.40 94.85 58121.08 542.41 100556.76 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2277.76 97.87 55742.81 638.04 97304.27 00:21:55.691 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode10) NSID 1 from core 0: 2277.76 97.87 55751.27 753.16 97199.80 00:21:55.691 ======================================================== 00:21:55.691 Total : 22140.74 951.36 57793.30 542.41 118399.23 00:21:55.691 00:21:55.691 [2024-11-20 10:39:35.990339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae410 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990386] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ae740 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990415] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: 
*ERROR*: The recv state of tqpair=0x22adbc0 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990443] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22adef0 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990470] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22aea70 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990497] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22afae0 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ad890 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990551] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22ad560 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990580] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af720 is same with the state(6) to be set 00:21:55.691 [2024-11-20 10:39:35.990609] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x22af900 is same with the state(6) to be set 00:21:55.691 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:21:55.691 10:39:36 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@156 -- # sleep 1 00:21:56.628 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@158 -- # NOT wait 3287751 00:21:56.628 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@652 -- # local es=0 00:21:56.628 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3287751 00:21:56.628 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
common/autotest_common.sh@640 -- # local arg=wait 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # type -t wait 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # wait 3287751 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@655 -- # es=1 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@159 -- # stoptarget 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@42 -- # rm -f ./local-job0-0-verify.state 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@44 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- target/shutdown.sh@46 -- # nvmftestfini 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- 
nvmf/common.sh@335 -- # nvmfcleanup 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@99 -- # sync 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@102 -- # set +e 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@103 -- # for i in {1..20} 00:21:56.629 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:21:56.629 rmmod nvme_tcp 00:21:56.629 rmmod nvme_fabrics 00:21:56.887 rmmod nvme_keyring 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@106 -- # set -e 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@107 -- # return 0 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@336 -- # '[' -n 3287437 ']' 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@337 -- # killprocess 3287437 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@954 -- # '[' -z 3287437 ']' 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@958 -- # kill -0 3287437 00:21:56.887 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3287437) - No such process 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@981 -- # echo 'Process with pid 3287437 is not found' 00:21:56.887 Process with pid 3287437 is not found 
00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@342 -- # nvmf_fini 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@264 -- # local dev 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@267 -- # remove_target_ns 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:56.887 10:39:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@268 -- # delete_main_bridge 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@130 -- # return 0 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:21:58.792 10:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # _dev=0 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@41 -- # dev_map=() 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/setup.sh@284 -- # iptr 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- 
# iptables-save 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- nvmf/common.sh@542 -- # iptables-restore 00:21:58.792 00:21:58.792 real 0m10.538s 00:21:58.792 user 0m27.608s 00:21:58.792 sys 0m5.231s 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc4 -- common/autotest_common.sh@10 -- # set +x 00:21:58.792 ************************************ 00:21:58.792 END TEST nvmf_shutdown_tc4 00:21:58.792 ************************************ 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@170 -- # trap - SIGINT SIGTERM EXIT 00:21:58.792 00:21:58.792 real 0m42.480s 00:21:58.792 user 1m45.560s 00:21:58.792 sys 0m14.148s 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:21:58.792 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:58.792 ************************************ 00:21:58.792 END TEST nvmf_shutdown 00:21:58.792 ************************************ 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@67 -- # run_test nvmf_nsid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1111 -- # xtrace_disable 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:59.050 ************************************ 00:21:59.050 START TEST nvmf_nsid 00:21:59.050 ************************************ 00:21:59.050 10:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nsid.sh --transport=tcp 00:21:59.050 * Looking for test storage... 00:21:59.050 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lcov --version 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@333 -- # local ver1 ver1_l 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@334 -- # local ver2 ver2_l 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # IFS=.-: 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@336 -- # read -ra ver1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # IFS=.-: 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@337 -- # read -ra ver2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@338 -- # local 'op=<' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@340 -- # ver1_l=2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@341 -- # ver2_l=1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@344 -- # case "$op" 
in 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@345 -- # : 1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v = 0 )) 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # decimal 1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@365 -- # ver1[v]=1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # decimal 2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@353 -- # local d=2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@355 -- # echo 2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@366 -- # ver2[v]=2 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@368 -- # return 0 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:21:59.050 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:21:59.050 --rc genhtml_branch_coverage=1 00:21:59.050 --rc genhtml_function_coverage=1 00:21:59.050 --rc genhtml_legend=1 00:21:59.050 --rc geninfo_all_blocks=1 00:21:59.050 --rc geninfo_unexecuted_blocks=1 00:21:59.050 00:21:59.050 ' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:21:59.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.050 --rc genhtml_branch_coverage=1 00:21:59.050 --rc genhtml_function_coverage=1 00:21:59.050 --rc genhtml_legend=1 00:21:59.050 --rc geninfo_all_blocks=1 00:21:59.050 --rc geninfo_unexecuted_blocks=1 00:21:59.050 00:21:59.050 ' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:21:59.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.050 --rc genhtml_branch_coverage=1 00:21:59.050 --rc genhtml_function_coverage=1 00:21:59.050 --rc genhtml_legend=1 00:21:59.050 --rc geninfo_all_blocks=1 00:21:59.050 --rc geninfo_unexecuted_blocks=1 00:21:59.050 00:21:59.050 ' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:21:59.050 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:21:59.050 --rc genhtml_branch_coverage=1 00:21:59.050 --rc genhtml_function_coverage=1 00:21:59.050 --rc genhtml_legend=1 00:21:59.050 --rc geninfo_all_blocks=1 00:21:59.050 --rc geninfo_unexecuted_blocks=1 00:21:59.050 00:21:59.050 ' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # uname -s 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:59.050 
10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:21:59.050 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:21:59.308 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:59.308 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:59.308 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@15 -- # shopt -s extglob 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:59.309 10:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@5 -- # export PATH 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@50 -- # : 0 00:21:59.309 10:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:21:59.309 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@54 -- # have_pci_nics=0 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@11 -- # subnqn1=nqn.2024-10.io.spdk:cnode0 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@12 -- # subnqn2=nqn.2024-10.io.spdk:cnode1 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@13 -- # subnqn3=nqn.2024-10.io.spdk:cnode2 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@14 -- # tgt2sock=/var/tmp/tgt2.sock 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@15 -- # tgt2pid= 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@46 -- # nvmftestinit 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:59.309 10:39:39 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@296 -- # prepare_net_devs 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@258 -- # local -g is_hw=no 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@260 -- # remove_target_ns 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@125 -- # xtrace_disable 00:21:59.309 10:39:39 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # pci_devs=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@135 -- # net_devs=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/common.sh@135 -- # local -ga net_devs 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # e810=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@136 -- # local -ga e810 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # x722=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@137 -- # local -ga x722 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # mlx=() 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@138 -- # local -ga mlx 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:05.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:05.874 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:05.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:05.875 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:05.875 Found net devices under 0000:86:00.0: cvl_0_0 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.875 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:05.875 Found net devices under 0000:86:00.1: cvl_0_1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@262 -- # is_hw=yes 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@257 -- # create_target_ns 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@28 -- # local -g _dev 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:05.875 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # ips=() 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:05.875 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772161 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:05.875 10.0.0.1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:05.875 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@11 -- # local val=167772162 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:05.875 10.0.0.2 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.875 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:05.876 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.876 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:05.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:05.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:22:05.876 00:22:05.876 --- 10.0.0.1 ping statistics --- 00:22:05.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.876 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid 
-- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:05.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:22:05.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.068 ms 00:22:05.876 00:22:05.876 --- 10.0.0.2 ping statistics --- 00:22:05.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:05.876 rtt min/avg/max/mdev = 0.068/0.068/0.068/0.000 ms 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@270 -- # return 0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 
00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
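The `get_ip_address` steps read back the address the suite previously stashed in the interface's `ifalias` sysfs attribute (optionally inside the `nvmf_ns_spdk` netns). A sketch of that read; the `SYSFS_NET` override is an assumption added here so the snippet can run against a stand-in directory instead of the real `/sys/class/net`:

```shell
#!/usr/bin/env bash
# SYSFS_NET defaults to the real sysfs tree; override it for testing.
: "${SYSFS_NET:=/sys/class/net}"

# get_ip_address: print the IP recorded in <dev>'s ifalias, if any.
get_ip_address() {
  local dev=$1 ip
  ip=$(cat "$SYSFS_NET/$dev/ifalias" 2>/dev/null)
  [[ -n $ip ]] || return 1
  echo "$ip"
}
```

In the namespaced case the trace wraps the same `cat` in `ip netns exec nvmf_ns_spdk`, which is why the target-side reads of `cvl_0_1/ifalias` go through `eval`.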
nvmf/setup.sh@109 -- # return 1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target0 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:05.876 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:05.877 10:39:45 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@107 -- # local dev=target1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@109 -- # return 1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@168 -- # dev= 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@169 -- # return 0 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@308 -- # 
NVMF_TRANSPORT_OPTS='-t tcp' 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@47 -- # nvmfappstart -m 1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@328 -- # nvmfpid=3292272 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 1 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@329 -- # waitforlisten 3292272 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3292272 ']' 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:05.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.877 10:39:45 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 [2024-11-20 10:39:45.905149] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:05.877 [2024-11-20 10:39:45.905193] [ DPDK EAL parameters: nvmf -c 1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:05.877 [2024-11-20 10:39:45.985413] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.877 [2024-11-20 10:39:46.025642] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:05.877 [2024-11-20 10:39:46.025677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:05.877 [2024-11-20 10:39:46.025684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:05.877 [2024-11-20 10:39:46.025690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:05.877 [2024-11-20 10:39:46.025695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:05.877 [2024-11-20 10:39:46.026253] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@49 -- # trap cleanup SIGINT SIGTERM EXIT 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@52 -- # tgt2pid=3292292 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/tgt2.sock 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@54 -- # tgt1addr=10.0.0.2 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@55 -- # tgt2addr=10.0.0.1 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # uuidgen 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@56 -- # ns1uuid=73c1c184-9363-402b-838a-b8d40b24ce9d 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # uuidgen 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@57 -- # ns2uuid=3a7145f4-1bc9-4990-ae76-69f97ded8c00 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # uuidgen 00:22:05.877 10:39:46 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@58 -- # ns3uuid=8a690813-a944-4303-a6eb-e7f21a9b643b 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@63 -- # rpc_cmd 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 null0 00:22:05.877 null1 00:22:05.877 null2 00:22:05.877 [2024-11-20 10:39:46.207714] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:05.877 [2024-11-20 10:39:46.209684] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:05.877 [2024-11-20 10:39:46.209726] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3292292 ] 00:22:05.877 [2024-11-20 10:39:46.231903] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@79 -- # waitforlisten 3292292 /var/tmp/tgt2.sock 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@835 -- # '[' -z 3292292 ']' 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/tgt2.sock 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock...' 
00:22:05.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/tgt2.sock... 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:05.877 [2024-11-20 10:39:46.281643] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:05.877 [2024-11-20 10:39:46.322379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@868 -- # return 0 00:22:05.877 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/tgt2.sock 00:22:06.135 [2024-11-20 10:39:46.855111] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:06.393 [2024-11-20 10:39:46.871230] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.1 port 4421 *** 00:22:06.393 nvme0n1 nvme0n2 00:22:06.393 nvme1n1 00:22:06.393 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # nvme_connect 00:22:06.393 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@23 -- # local ctrlr 00:22:06.393 10:39:46 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@25 -- # nvme connect -t tcp -a 10.0.0.1 -s 4421 -n nqn.2024-10.io.spdk:cnode2 --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@28 -- # for ctrlr in /sys/class/nvme/nvme* 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@30 -- # [[ -e /sys/class/nvme/nvme0/subsysnqn ]] 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
target/nsid.sh@30 -- # [[ nqn.2024-10.io.spdk:cnode2 == \n\q\n\.\2\0\2\4\-\1\0\.\i\o\.\s\p\d\k\:\c\n\o\d\e\2 ]] 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@31 -- # echo nvme0 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@32 -- # return 0 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@94 -- # ctrlr=nvme0 00:22:07.326 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@95 -- # waitforblk nvme0n1 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1241 -- # '[' 0 -lt 15 ']' 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1242 -- # i=1 00:22:07.327 10:39:47 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1243 -- # sleep 1 00:22:08.698 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.698 10:39:48 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n1 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n1 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # uuid2nguid 73c1c184-9363-402b-838a-b8d40b24ce9d 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:22:08.698 10:39:49 
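The `waitforblk` calls above poll `lsblk` until the expected block device (`nvme0n1`) appears, sleeping between attempts. A sketch of that retry loop, assuming the 15-attempt, 1-second budget the `'[' 0 -lt 15 ']'` check in the trace suggests:

```shell
#!/usr/bin/env bash
# waitforblk: poll until <name> appears in lsblk output, up to 15 tries.
waitforblk() {
  local name=$1 i=0
  while ! lsblk -l -o NAME | grep -q -w "$name"; do
    (( ++i < 15 )) || return 1   # give up after 15 attempts
    sleep 1
  done
}
```

The trace shows exactly this shape: the first `lsblk | grep` misses (the connect is still settling), `i` becomes 1, and the device is found on the next pass.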
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # nvme_get_nguid nvme0 1 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=1 nguid 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n1 -o json 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=73c1c1849363402b838ab8d40b24ce9d 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 73C1C1849363402B838AB8D40B24CE9D 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@96 -- # [[ 73C1C1849363402B838AB8D40B24CE9D == \7\3\C\1\C\1\8\4\9\3\6\3\4\0\2\B\8\3\8\A\B\8\D\4\0\B\2\4\C\E\9\D ]] 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@97 -- # waitforblk nvme0n2 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n2 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n2 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # uuid2nguid 3a7145f4-1bc9-4990-ae76-69f97ded8c00 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # nvme_get_nguid nvme0 2 00:22:08.698 10:39:49 
nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=2 nguid 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n2 -o json 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=3a7145f41bc94990ae7669f97ded8c00 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 3A7145F41BC94990AE7669F97DED8C00 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@98 -- # [[ 3A7145F41BC94990AE7669F97DED8C00 == \3\A\7\1\4\5\F\4\1\B\C\9\4\9\9\0\A\E\7\6\6\9\F\9\7\D\E\D\8\C\0\0 ]] 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@99 -- # waitforblk nvme0n3 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1239 -- # local i=0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # lsblk -l -o NAME 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1240 -- # grep -q -w nvme0n3 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # lsblk -l -o NAME 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1246 -- # grep -q -w nvme0n3 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1250 -- # return 0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # uuid2nguid 8a690813-a944-4303-a6eb-e7f21a9b643b 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@538 -- # tr -d - 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # nvme_get_nguid nvme0 3 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@40 -- # local ctrlr=nvme0 nsid=3 nguid 00:22:08.698 
10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nvme id-ns /dev/nvme0n3 -o json 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # jq -r .nguid 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@42 -- # nguid=8a690813a9444303a6ebe7f21a9b643b 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@43 -- # echo 8A690813A9444303A6EBE7F21A9B643B 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@100 -- # [[ 8A690813A9444303A6EBE7F21A9B643B == \8\A\6\9\0\8\1\3\A\9\4\4\4\3\0\3\A\6\E\B\E\7\F\2\1\A\9\B\6\4\3\B ]] 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@101 -- # nvme disconnect -d /dev/nvme0 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@103 -- # trap - SIGINT SIGTERM EXIT 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@104 -- # cleanup 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@18 -- # killprocess 3292292 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3292292 ']' 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3292292 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:08.698 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292292 00:22:08.956 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:22:08.956 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:22:08.956 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 
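The three NGUID checks above compare each `uuidgen` value, dashes stripped and case folded, against the `nguid` field from `nvme id-ns ... -o json | jq -r .nguid`. A sketch of that conversion, assuming the helper simply removes dashes and upcases, consistent with the `tr -d -` call and the uppercase operands in the trace's `[[ ... == ... ]]` comparisons:

```shell
#!/usr/bin/env bash
# uuid2nguid: strip dashes from a UUID and upcase it, matching the
# uppercase NGUID that nvme-cli reports for the namespace.
uuid2nguid() {
  local uuid=${1//-/}   # drop the dashes
  echo "${uuid^^}"      # fold to uppercase
}

uuid2nguid 73c1c184-9363-402b-838a-b8d40b24ce9d
# prints 73C1C1849363402B838AB8D40B24CE9D
```

This is what lets the test assert that the NGUID the target assigned to each namespace round-trips exactly to the UUID it was created with.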
'killing process with pid 3292292' 00:22:08.956 killing process with pid 3292292 00:22:08.956 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3292292 00:22:08.956 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3292292 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- target/nsid.sh@19 -- # nvmftestfini 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@99 -- # sync 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@102 -- # set +e 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:09.214 rmmod nvme_tcp 00:22:09.214 rmmod nvme_fabrics 00:22:09.214 rmmod nvme_keyring 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@106 -- # set -e 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@107 -- # return 0 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@336 -- # '[' -n 3292272 ']' 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@337 -- # killprocess 3292272 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@954 -- # '[' -z 3292272 ']' 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@958 -- # kill -0 3292272 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@959 -- # uname 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3292272 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3292272' 00:22:09.214 killing process with pid 3292272 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@973 -- # kill 3292272 00:22:09.214 10:39:49 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@978 -- # wait 3292272 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@342 -- # nvmf_fini 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@264 -- # local dev 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:09.472 10:39:50 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@130 -- # return 0 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- 
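Both teardown paths run `killprocess`: check the PID's command name (refusing to signal a `sudo` wrapper), announce the kill, signal the process, and reap it. A reduced sketch of that flow; SIGKILL is used directly here for brevity, where the real helper may try gentler signals first:

```shell
#!/usr/bin/env bash
# killprocess: refuse to signal sudo, then kill the PID and wait for exit.
killprocess() {
  local pid=$1 name
  kill -0 "$pid" 2>/dev/null || return 1   # not running
  name=$(ps --no-headers -o comm= "$pid")
  [[ $name != sudo ]] || return 1          # never kill a sudo wrapper
  echo "killing process with pid $pid"
  kill -9 "$pid"
  wait "$pid" 2>/dev/null || true          # reap; ignore the kill status
}
```

The `reactor_0`/`reactor_1` names in the trace are the SPDK reactor threads' `comm` values, which is why the `'[' reactor_1 = sudo ']'` guard passes and the kill proceeds.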
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:11.376 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # _dev=0 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@41 -- # dev_map=() 00:22:11.635 
10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/setup.sh@284 -- # iptr 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-save 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- nvmf/common.sh@542 -- # iptables-restore 00:22:11.635 00:22:11.635 real 0m12.536s 00:22:11.635 user 0m9.650s 00:22:11.635 sys 0m5.667s 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra.nvmf_nsid -- common/autotest_common.sh@10 -- # set +x 00:22:11.635 ************************************ 00:22:11.635 END TEST nvmf_nsid 00:22:11.635 ************************************ 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:22:11.635 00:22:11.635 real 12m3.717s 00:22:11.635 user 25m48.799s 00:22:11.635 sys 3m47.269s 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:11.635 10:39:52 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:22:11.635 ************************************ 00:22:11.635 END TEST nvmf_target_extra 00:22:11.635 ************************************ 00:22:11.635 10:39:52 nvmf_tcp -- nvmf/nvmf.sh@12 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:11.635 10:39:52 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.635 10:39:52 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.635 10:39:52 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:22:11.635 ************************************ 00:22:11.635 START TEST nvmf_host 00:22:11.635 ************************************ 00:22:11.635 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:22:11.635 * Looking for test storage... 00:22:11.635 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:22:11.635 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.635 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.635 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # IFS=.-: 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@336 -- # read -ra ver1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # IFS=.-: 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@337 -- # read -ra ver2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@338 -- # local 'op=<' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@340 -- # ver1_l=2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@341 -- # ver2_l=1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@344 -- # case "$op" in 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@345 -- # : 1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # decimal 1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@365 -- # ver1[v]=1 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # decimal 2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@353 -- # local d=2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@355 -- # echo 2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@366 -- # ver2[v]=2 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@368 -- # return 0 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:11.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.894 --rc genhtml_branch_coverage=1 00:22:11.894 --rc genhtml_function_coverage=1 00:22:11.894 --rc genhtml_legend=1 00:22:11.894 --rc geninfo_all_blocks=1 00:22:11.894 --rc geninfo_unexecuted_blocks=1 00:22:11.894 00:22:11.894 ' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:11.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.894 --rc genhtml_branch_coverage=1 00:22:11.894 --rc genhtml_function_coverage=1 00:22:11.894 --rc genhtml_legend=1 00:22:11.894 --rc 
geninfo_all_blocks=1 00:22:11.894 --rc geninfo_unexecuted_blocks=1 00:22:11.894 00:22:11.894 ' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:11.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.894 --rc genhtml_branch_coverage=1 00:22:11.894 --rc genhtml_function_coverage=1 00:22:11.894 --rc genhtml_legend=1 00:22:11.894 --rc geninfo_all_blocks=1 00:22:11.894 --rc geninfo_unexecuted_blocks=1 00:22:11.894 00:22:11.894 ' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:11.894 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:11.894 --rc genhtml_branch_coverage=1 00:22:11.894 --rc genhtml_function_coverage=1 00:22:11.894 --rc genhtml_legend=1 00:22:11.894 --rc geninfo_all_blocks=1 00:22:11.894 --rc geninfo_unexecuted_blocks=1 00:22:11.894 00:22:11.894 ' 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:22:11.894 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 
00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@15 -- # shopt -s extglob 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- 
nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@50 -- # : 0 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:11.895 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.895 ************************************ 00:22:11.895 START TEST nvmf_aer 00:22:11.895 ************************************ 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:11.895 * Looking for test storage... 00:22:11.895 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lcov --version 00:22:11.895 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # IFS=.-: 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@336 -- # read -ra ver1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # IFS=.-: 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@337 -- # read -ra ver2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@338 -- # local 'op=<' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@340 -- # ver1_l=2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@341 -- # ver2_l=1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@344 -- # case "$op" in 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@345 -- # : 1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:12.155 10:39:52 
nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # decimal 1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@365 -- # ver1[v]=1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # decimal 2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@353 -- # local d=2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@355 -- # echo 2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@366 -- # ver2[v]=2 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@368 -- # return 0 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.155 --rc genhtml_branch_coverage=1 00:22:12.155 --rc genhtml_function_coverage=1 00:22:12.155 --rc genhtml_legend=1 00:22:12.155 --rc geninfo_all_blocks=1 00:22:12.155 --rc geninfo_unexecuted_blocks=1 00:22:12.155 00:22:12.155 ' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 
00:22:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.155 --rc genhtml_branch_coverage=1 00:22:12.155 --rc genhtml_function_coverage=1 00:22:12.155 --rc genhtml_legend=1 00:22:12.155 --rc geninfo_all_blocks=1 00:22:12.155 --rc geninfo_unexecuted_blocks=1 00:22:12.155 00:22:12.155 ' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.155 --rc genhtml_branch_coverage=1 00:22:12.155 --rc genhtml_function_coverage=1 00:22:12.155 --rc genhtml_legend=1 00:22:12.155 --rc geninfo_all_blocks=1 00:22:12.155 --rc geninfo_unexecuted_blocks=1 00:22:12.155 00:22:12.155 ' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:12.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:12.155 --rc genhtml_branch_coverage=1 00:22:12.155 --rc genhtml_function_coverage=1 00:22:12.155 --rc genhtml_legend=1 00:22:12.155 --rc geninfo_all_blocks=1 00:22:12.155 --rc geninfo_unexecuted_blocks=1 00:22:12.155 00:22:12.155 ' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:12.155 10:39:52 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@15 -- # shopt -s extglob 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:12.155 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 
00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@50 -- # : 0 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:12.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # remove_target_ns 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # xtrace_disable 00:22:12.156 10:39:52 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # pci_devs=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # net_devs=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # e810=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@136 -- # local -ga e810 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # x722=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@137 -- # local -ga x722 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # mlx=() 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@138 -- # local -ga mlx 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:18.719 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:22:18.719 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:18.719 Found net devices under 0000:86:00.0: cvl_0_0 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.719 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:18.719 10:39:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:18.720 Found net devices under 0000:86:00.1: cvl_0_1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@262 -- # is_hw=yes 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@257 -- # create_target_ns 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:18.720 10:39:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@28 -- # local -g _dev 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # ips=() 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:18.720 
10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772161 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:18.720 10:39:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:18.720 10.0.0.1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@11 -- # local val=167772162 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer 
-- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:18.720 10.0.0.2 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 
4420 -j ACCEPT' 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:18.720 
10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:18.720 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:18.721 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:18.721 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.320 ms 00:22:18.721 00:22:18.721 --- 10.0.0.1 ping statistics --- 00:22:18.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.721 rtt min/avg/max/mdev = 0.320/0.320/0.320/0.000 ms 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:18.721 10:39:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:18.721 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:18.721 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.196 ms 00:22:18.721 00:22:18.721 --- 10.0.0.2 ping statistics --- 00:22:18.721 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:18.721 rtt min/avg/max/mdev = 0.196/0.196/0.196/0.000 ms 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # return 0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:18.721 
10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:18.721 10:39:58 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:18.721 
10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@107 -- # local dev=target1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@109 -- # return 1 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@168 -- # dev= 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@169 -- # return 0 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t 
tcp' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@328 -- # nvmfpid=3296619 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # waitforlisten 3296619 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@835 -- # '[' -z 3296619 ']' 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:18.721 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:18.722 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:18.722 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:22:18.722 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:18.722 10:39:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.722 [2024-11-20 10:39:58.808346] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:18.722 [2024-11-20 10:39:58.808391] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:18.722 [2024-11-20 10:39:58.886013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:18.722 [2024-11-20 10:39:58.927622] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:18.722 [2024-11-20 10:39:58.927661] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:18.722 [2024-11-20 10:39:58.927667] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:18.722 [2024-11-20 10:39:58.927673] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:18.722 [2024-11-20 10:39:58.927678] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
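As an aside on the address plumbing traced earlier: setup.sh's `val_to_ip` turns a 32-bit pool value (e.g. 167772161) into dotted-quad form before `ip addr add`. A minimal standalone sketch of that conversion — the function name and the input/output pairs come from the trace above, but the octet-extraction arithmetic is an assumption about how it is implemented:

```shell
#!/usr/bin/env bash
# Sketch of setup.sh's val_to_ip as seen in the trace (167772161 -> 10.0.0.1).
# The shift-and-mask octet extraction is assumed; only the observed
# input/output pairs are taken from the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}

val_to_ip 167772161   # -> 10.0.0.1 (initiator0, per the trace)
val_to_ip 167772162   # -> 10.0.0.2 (target0)
```

The pool then advances by two per interface pair (`(( _dev++, ip_pool += 2 ))` in the trace), which is why initiator0/target0 land on 10.0.0.1/10.0.0.2.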
00:22:18.722 [2024-11-20 10:39:58.929167] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:18.722 [2024-11-20 10:39:58.929278] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:18.722 [2024-11-20 10:39:58.929308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:18.722 [2024-11-20 10:39:58.929308] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@868 -- # return 0 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:18.980 [2024-11-20 10:39:59.695026] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:18.980 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 Malloc0 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 [2024-11-20 10:39:59.762157] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.238 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.238 [ 00:22:19.238 { 00:22:19.238 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:19.238 "subtype": "Discovery", 00:22:19.238 "listen_addresses": 
[], 00:22:19.238 "allow_any_host": true, 00:22:19.238 "hosts": [] 00:22:19.238 }, 00:22:19.238 { 00:22:19.238 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.239 "subtype": "NVMe", 00:22:19.239 "listen_addresses": [ 00:22:19.239 { 00:22:19.239 "trtype": "TCP", 00:22:19.239 "adrfam": "IPv4", 00:22:19.239 "traddr": "10.0.0.2", 00:22:19.239 "trsvcid": "4420" 00:22:19.239 } 00:22:19.239 ], 00:22:19.239 "allow_any_host": true, 00:22:19.239 "hosts": [], 00:22:19.239 "serial_number": "SPDK00000000000001", 00:22:19.239 "model_number": "SPDK bdev Controller", 00:22:19.239 "max_namespaces": 2, 00:22:19.239 "min_cntlid": 1, 00:22:19.239 "max_cntlid": 65519, 00:22:19.239 "namespaces": [ 00:22:19.239 { 00:22:19.239 "nsid": 1, 00:22:19.239 "bdev_name": "Malloc0", 00:22:19.239 "name": "Malloc0", 00:22:19.239 "nguid": "CE7AE52F7D5748409662639C20FB9373", 00:22:19.239 "uuid": "ce7ae52f-7d57-4840-9662-639c20fb9373" 00:22:19.239 } 00:22:19.239 ] 00:22:19.239 } 00:22:19.239 ] 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=3296868 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # local i=0 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 0 -lt 200 ']' 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=1 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 1 -lt 200 ']' 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=2 00:22:19.239 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:19.496 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.496 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1271 -- # '[' 2 -lt 200 ']' 00:22:19.496 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # i=3 00:22:19.496 10:39:59 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1273 -- # sleep 0.1 00:22:19.496 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1270 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:19.496 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:19.496 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1280 -- # return 0 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.497 Malloc1 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.497 Asynchronous Event Request test 00:22:19.497 Attaching to 10.0.0.2 00:22:19.497 Attached to 10.0.0.2 00:22:19.497 Registering asynchronous event callbacks... 00:22:19.497 Starting namespace attribute notice tests for all controllers... 00:22:19.497 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:19.497 aer_cb - Changed Namespace 00:22:19.497 Cleaning up... 
00:22:19.497 [ 00:22:19.497 { 00:22:19.497 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:19.497 "subtype": "Discovery", 00:22:19.497 "listen_addresses": [], 00:22:19.497 "allow_any_host": true, 00:22:19.497 "hosts": [] 00:22:19.497 }, 00:22:19.497 { 00:22:19.497 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:19.497 "subtype": "NVMe", 00:22:19.497 "listen_addresses": [ 00:22:19.497 { 00:22:19.497 "trtype": "TCP", 00:22:19.497 "adrfam": "IPv4", 00:22:19.497 "traddr": "10.0.0.2", 00:22:19.497 "trsvcid": "4420" 00:22:19.497 } 00:22:19.497 ], 00:22:19.497 "allow_any_host": true, 00:22:19.497 "hosts": [], 00:22:19.497 "serial_number": "SPDK00000000000001", 00:22:19.497 "model_number": "SPDK bdev Controller", 00:22:19.497 "max_namespaces": 2, 00:22:19.497 "min_cntlid": 1, 00:22:19.497 "max_cntlid": 65519, 00:22:19.497 "namespaces": [ 00:22:19.497 { 00:22:19.497 "nsid": 1, 00:22:19.497 "bdev_name": "Malloc0", 00:22:19.497 "name": "Malloc0", 00:22:19.497 "nguid": "CE7AE52F7D5748409662639C20FB9373", 00:22:19.497 "uuid": "ce7ae52f-7d57-4840-9662-639c20fb9373" 00:22:19.497 }, 00:22:19.497 { 00:22:19.497 "nsid": 2, 00:22:19.497 "bdev_name": "Malloc1", 00:22:19.497 "name": "Malloc1", 00:22:19.497 "nguid": "23BFF57318C94E2DBCD2812E1650B585", 00:22:19.497 "uuid": "23bff573-18c9-4e2d-bcd2-812e1650b585" 00:22:19.497 } 00:22:19.497 ] 00:22:19.497 } 00:22:19.497 ] 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 3296868 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.497 10:40:00 
nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:19.497 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@99 -- # sync 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@102 -- # set +e 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:19.755 rmmod nvme_tcp 00:22:19.755 rmmod nvme_fabrics 00:22:19.755 rmmod nvme_keyring 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@106 -- # set -e 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@107 -- # return 0 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@336 -- # '[' -n 
3296619 ']' 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@337 -- # killprocess 3296619 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # '[' -z 3296619 ']' 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # kill -0 3296619 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # uname 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3296619 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3296619' 00:22:19.755 killing process with pid 3296619 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@973 -- # kill 3296619 00:22:19.755 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@978 -- # wait 3296619 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # nvmf_fini 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@264 -- # local dev 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:20.014 10:40:00 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:21.917 10:40:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@130 -- # return 0 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:21.917 10:40:02 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # _dev=0 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@41 -- # dev_map=() 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/setup.sh@284 -- # iptr 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-save 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@542 -- # iptables-restore 00:22:21.917 00:22:21.917 real 0m10.119s 00:22:21.917 user 0m8.279s 00:22:21.917 sys 0m5.012s 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:21.917 ************************************ 00:22:21.917 END TEST nvmf_aer 00:22:21.917 ************************************ 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:21.917 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:22.176 ************************************ 00:22:22.176 START TEST nvmf_async_init 00:22:22.176 ************************************ 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:22.176 * Looking for test storage... 
00:22:22.176 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lcov --version 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # IFS=.-: 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@336 -- # read -ra ver1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # IFS=.-: 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@337 -- # read -ra ver2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@338 -- # local 'op=<' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@340 -- # ver1_l=2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@341 -- # ver2_l=1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@344 -- # case "$op" in 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@345 -- # : 1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:22.176 10:40:02 
nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # decimal 1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@365 -- # ver1[v]=1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # decimal 2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@353 -- # local d=2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@355 -- # echo 2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@366 -- # ver2[v]=2 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@368 -- # return 0 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:22.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.176 --rc genhtml_branch_coverage=1 00:22:22.176 --rc genhtml_function_coverage=1 00:22:22.176 --rc genhtml_legend=1 00:22:22.176 --rc geninfo_all_blocks=1 00:22:22.176 --rc geninfo_unexecuted_blocks=1 00:22:22.176 
00:22:22.176 ' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:22.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.176 --rc genhtml_branch_coverage=1 00:22:22.176 --rc genhtml_function_coverage=1 00:22:22.176 --rc genhtml_legend=1 00:22:22.176 --rc geninfo_all_blocks=1 00:22:22.176 --rc geninfo_unexecuted_blocks=1 00:22:22.176 00:22:22.176 ' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:22.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.176 --rc genhtml_branch_coverage=1 00:22:22.176 --rc genhtml_function_coverage=1 00:22:22.176 --rc genhtml_legend=1 00:22:22.176 --rc geninfo_all_blocks=1 00:22:22.176 --rc geninfo_unexecuted_blocks=1 00:22:22.176 00:22:22.176 ' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:22.176 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:22.176 --rc genhtml_branch_coverage=1 00:22:22.176 --rc genhtml_function_coverage=1 00:22:22.176 --rc genhtml_legend=1 00:22:22.176 --rc geninfo_all_blocks=1 00:22:22.176 --rc geninfo_unexecuted_blocks=1 00:22:22.176 00:22:22.176 ' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@15 -- # shopt -s extglob 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:22.176 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- 
paths/export.sh@5 -- # export PATH 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@50 -- # : 0 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:22.177 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=22402739928a4ec3a89d75052d570f72 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # remove_target_ns 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # xtrace_disable 00:22:22.177 10:40:02 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # pci_devs=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # net_devs=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # e810=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@136 -- # local -ga e810 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # x722=() 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@137 -- # local -ga x722 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # mlx=() 00:22:28.743 10:40:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@138 -- # local -ga mlx 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@171 
-- # [[ e810 == e810 ]] 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:28.743 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:28.743 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:28.744 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:28.744 
10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:28.744 Found net devices under 0000:86:00.0: cvl_0_0 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:28.744 10:40:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:28.744 Found net devices under 0000:86:00.1: cvl_0_1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@262 -- # is_hw=yes 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@257 -- # create_target_ns 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@28 -- # local -g _dev 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # ips=() 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772161 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # 
eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:28.744 10.0.0.1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@11 -- # local val=167772162 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # echo 
10.0.0.2 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:28.744 10.0.0.2 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:28.744 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp 
--dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 
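The `val_to_ip` helper visible in the trace above turns an integer from the IP pool into dotted-quad form (167772161 becomes 10.0.0.1). A minimal standalone sketch, with the function body reconstructed from the `printf '%u.%u.%u.%u\n'` output in the log rather than copied from `nvmf/setup.sh`:

```shell
#!/usr/bin/env bash
# Reconstruction of the val_to_ip behaviour shown in the trace:
# 167772161 (0x0A000001) -> 10.0.0.1, 167772162 -> 10.0.0.2.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 255 )) $(( (val >> 16) & 255 )) \
    $(( (val >> 8)  & 255 )) $((  val        & 255 ))
}

val_to_ip 167772161
val_to_ip 167772162
```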
00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:28.745 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:28.745 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.262 ms 00:22:28.745 00:22:28.745 --- 10.0.0.1 ping statistics --- 00:22:28.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.745 rtt min/avg/max/mdev = 0.262/0.262/0.262/0.000 ms 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:28.745 
10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:28.745 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:28.745 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.145 ms 00:22:28.745 00:22:28.745 --- 10.0.0.2 ping statistics --- 00:22:28.745 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:28.745 rtt min/avg/max/mdev = 0.145/0.145/0.145/0.000 ms 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # return 0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 
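The `setup_interfaces 1 phy` pass above starts from `ip_pool=0x0a000001` and hands each initiator/target pair two consecutive addresses, which is why cvl_0_0 gets 10.0.0.1 and cvl_0_1 (inside the namespace) gets 10.0.0.2. A hedged sketch of that allocation, with the loop shape inferred from the `(( _dev++, ip_pool += 2 ))` arithmetic in the trace:

```shell
#!/usr/bin/env bash
# Sketch of setup_interfaces' address allocation: each pair consumes two
# consecutive pool values (initiator first, then target).
ip_pool=$((0x0A000001)) no=1
for (( dev = 0; dev < no; dev++ )); do
  initiator_ip=$(( ip_pool + dev * 2 ))
  target_ip=$(( initiator_ip + 1 ))
  printf 'pair%u: initiator=%u target=%u\n' "$dev" "$initiator_ip" "$target_ip"
done
```

Feeding these integers through `val_to_ip` yields the 10.0.0.1/10.0.0.2 pair pinged in the log.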
00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@183 -- # 
get_ip_address initiator1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target0 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init 
-- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:28.745 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@107 -- # local dev=target1 00:22:28.746 10:40:08 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@109 -- # return 1 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@168 -- # dev= 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@169 -- # return 0 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@328 -- # nvmfpid=3300424 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@329 -- # waitforlisten 3300424 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@835 -- # '[' -z 3300424 ']' 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:28.746 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:28.746 10:40:08 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 [2024-11-20 10:40:09.021940] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:28.746 [2024-11-20 10:40:09.021982] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:28.746 [2024-11-20 10:40:09.083954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:28.746 [2024-11-20 10:40:09.124959] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:28.746 [2024-11-20 10:40:09.124993] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:28.746 [2024-11-20 10:40:09.125000] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:28.746 [2024-11-20 10:40:09.125006] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
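The `waitforlisten 3300424` step above blocks until the freshly started `nvmf_tgt` is accepting RPCs on `/var/tmp/spdk.sock`. The real helper lives in `autotest_common.sh`; the following is only a simplified, hypothetical poll illustrating the idea (function name, retry count, and socket check are assumptions, not the actual implementation):

```shell
#!/usr/bin/env bash
# Hypothetical waitforlisten-style poll: retry until the RPC unix-domain
# socket appears, then return 0; give up after N attempts.
wait_for_socket() {
  local sock=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -S "$sock" ] && return 0
    sleep 0.1
  done
  return 1
}
```

Usage would be `wait_for_socket /var/tmp/spdk.sock` after launching the target; the production helper additionally verifies the PID is still alive.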
00:22:28.746 [2024-11-20 10:40:09.125011] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:28.746 [2024-11-20 10:40:09.125594] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@868 -- # return 0 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 [2024-11-20 10:40:09.255319] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 null0 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g 22402739928a4ec3a89d75052d570f72 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:28.746 [2024-11-20 10:40:09.299573] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:28.746 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.004 nvme0n1 00:22:29.004 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.004 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:29.004 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.004 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.004 [ 00:22:29.004 { 00:22:29.004 "name": "nvme0n1", 00:22:29.004 "aliases": [ 00:22:29.004 "22402739-928a-4ec3-a89d-75052d570f72" 00:22:29.004 ], 00:22:29.004 "product_name": "NVMe disk", 00:22:29.004 "block_size": 512, 00:22:29.004 "num_blocks": 2097152, 00:22:29.004 "uuid": "22402739-928a-4ec3-a89d-75052d570f72", 00:22:29.004 "numa_id": 1, 00:22:29.004 "assigned_rate_limits": { 00:22:29.004 "rw_ios_per_sec": 0, 00:22:29.004 "rw_mbytes_per_sec": 0, 00:22:29.004 "r_mbytes_per_sec": 0, 00:22:29.004 "w_mbytes_per_sec": 0 00:22:29.004 }, 00:22:29.004 "claimed": false, 00:22:29.004 "zoned": false, 00:22:29.004 "supported_io_types": { 00:22:29.004 "read": true, 00:22:29.004 "write": true, 00:22:29.004 "unmap": false, 00:22:29.004 "flush": true, 00:22:29.004 "reset": true, 00:22:29.004 "nvme_admin": true, 00:22:29.004 "nvme_io": true, 00:22:29.004 "nvme_io_md": false, 00:22:29.004 "write_zeroes": true, 00:22:29.004 "zcopy": false, 00:22:29.004 "get_zone_info": false, 00:22:29.004 "zone_management": false, 00:22:29.004 "zone_append": false, 00:22:29.004 "compare": true, 00:22:29.004 
"compare_and_write": true, 00:22:29.004 "abort": true, 00:22:29.004 "seek_hole": false, 00:22:29.004 "seek_data": false, 00:22:29.004 "copy": true, 00:22:29.004 "nvme_iov_md": false 00:22:29.004 }, 00:22:29.005 "memory_domains": [ 00:22:29.005 { 00:22:29.005 "dma_device_id": "system", 00:22:29.005 "dma_device_type": 1 00:22:29.005 } 00:22:29.005 ], 00:22:29.005 "driver_specific": { 00:22:29.005 "nvme": [ 00:22:29.005 { 00:22:29.005 "trid": { 00:22:29.005 "trtype": "TCP", 00:22:29.005 "adrfam": "IPv4", 00:22:29.005 "traddr": "10.0.0.2", 00:22:29.005 "trsvcid": "4420", 00:22:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:29.005 }, 00:22:29.005 "ctrlr_data": { 00:22:29.005 "cntlid": 1, 00:22:29.005 "vendor_id": "0x8086", 00:22:29.005 "model_number": "SPDK bdev Controller", 00:22:29.005 "serial_number": "00000000000000000000", 00:22:29.005 "firmware_revision": "25.01", 00:22:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.005 "oacs": { 00:22:29.005 "security": 0, 00:22:29.005 "format": 0, 00:22:29.005 "firmware": 0, 00:22:29.005 "ns_manage": 0 00:22:29.005 }, 00:22:29.005 "multi_ctrlr": true, 00:22:29.005 "ana_reporting": false 00:22:29.005 }, 00:22:29.005 "vs": { 00:22:29.005 "nvme_version": "1.3" 00:22:29.005 }, 00:22:29.005 "ns_data": { 00:22:29.005 "id": 1, 00:22:29.005 "can_share": true 00:22:29.005 } 00:22:29.005 } 00:22:29.005 ], 00:22:29.005 "mp_policy": "active_passive" 00:22:29.005 } 00:22:29.005 } 00:22:29.005 ] 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.005 [2024-11-20 10:40:09.564116] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: 
[nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:22:29.005 [2024-11-20 10:40:09.564177] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1e94220 (9): Bad file descriptor 00:22:29.005 [2024-11-20 10:40:09.698290] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 2] Resetting controller successful. 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.005 [ 00:22:29.005 { 00:22:29.005 "name": "nvme0n1", 00:22:29.005 "aliases": [ 00:22:29.005 "22402739-928a-4ec3-a89d-75052d570f72" 00:22:29.005 ], 00:22:29.005 "product_name": "NVMe disk", 00:22:29.005 "block_size": 512, 00:22:29.005 "num_blocks": 2097152, 00:22:29.005 "uuid": "22402739-928a-4ec3-a89d-75052d570f72", 00:22:29.005 "numa_id": 1, 00:22:29.005 "assigned_rate_limits": { 00:22:29.005 "rw_ios_per_sec": 0, 00:22:29.005 "rw_mbytes_per_sec": 0, 00:22:29.005 "r_mbytes_per_sec": 0, 00:22:29.005 "w_mbytes_per_sec": 0 00:22:29.005 }, 00:22:29.005 "claimed": false, 00:22:29.005 "zoned": false, 00:22:29.005 "supported_io_types": { 00:22:29.005 "read": true, 00:22:29.005 "write": true, 00:22:29.005 "unmap": false, 00:22:29.005 "flush": true, 00:22:29.005 "reset": true, 00:22:29.005 "nvme_admin": true, 00:22:29.005 "nvme_io": true, 00:22:29.005 "nvme_io_md": false, 00:22:29.005 "write_zeroes": true, 00:22:29.005 "zcopy": false, 00:22:29.005 "get_zone_info": false, 00:22:29.005 "zone_management": false, 00:22:29.005 "zone_append": false, 00:22:29.005 "compare": true, 00:22:29.005 "compare_and_write": true, 00:22:29.005 "abort": true, 00:22:29.005 
"seek_hole": false, 00:22:29.005 "seek_data": false, 00:22:29.005 "copy": true, 00:22:29.005 "nvme_iov_md": false 00:22:29.005 }, 00:22:29.005 "memory_domains": [ 00:22:29.005 { 00:22:29.005 "dma_device_id": "system", 00:22:29.005 "dma_device_type": 1 00:22:29.005 } 00:22:29.005 ], 00:22:29.005 "driver_specific": { 00:22:29.005 "nvme": [ 00:22:29.005 { 00:22:29.005 "trid": { 00:22:29.005 "trtype": "TCP", 00:22:29.005 "adrfam": "IPv4", 00:22:29.005 "traddr": "10.0.0.2", 00:22:29.005 "trsvcid": "4420", 00:22:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:29.005 }, 00:22:29.005 "ctrlr_data": { 00:22:29.005 "cntlid": 2, 00:22:29.005 "vendor_id": "0x8086", 00:22:29.005 "model_number": "SPDK bdev Controller", 00:22:29.005 "serial_number": "00000000000000000000", 00:22:29.005 "firmware_revision": "25.01", 00:22:29.005 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.005 "oacs": { 00:22:29.005 "security": 0, 00:22:29.005 "format": 0, 00:22:29.005 "firmware": 0, 00:22:29.005 "ns_manage": 0 00:22:29.005 }, 00:22:29.005 "multi_ctrlr": true, 00:22:29.005 "ana_reporting": false 00:22:29.005 }, 00:22:29.005 "vs": { 00:22:29.005 "nvme_version": "1.3" 00:22:29.005 }, 00:22:29.005 "ns_data": { 00:22:29.005 "id": 1, 00:22:29.005 "can_share": true 00:22:29.005 } 00:22:29.005 } 00:22:29.005 ], 00:22:29.005 "mp_policy": "active_passive" 00:22:29.005 } 00:22:29.005 } 00:22:29.005 ] 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.005 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.TRK56d0vWo 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.TRK56d0vWo 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd keyring_file_add_key key0 /tmp/tmp.TRK56d0vWo 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@58 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 [2024-11-20 10:40:09.772753] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:29.263 [2024-11-20 10:40:09.772857] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@60 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@66 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk key0 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 [2024-11-20 10:40:09.792819] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:29.263 nvme0n1 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@70 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.263 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.263 [ 00:22:29.263 { 00:22:29.263 "name": "nvme0n1", 00:22:29.263 "aliases": [ 00:22:29.263 "22402739-928a-4ec3-a89d-75052d570f72" 00:22:29.263 ], 00:22:29.263 "product_name": "NVMe disk", 00:22:29.263 "block_size": 512, 
00:22:29.263 "num_blocks": 2097152, 00:22:29.263 "uuid": "22402739-928a-4ec3-a89d-75052d570f72", 00:22:29.263 "numa_id": 1, 00:22:29.263 "assigned_rate_limits": { 00:22:29.263 "rw_ios_per_sec": 0, 00:22:29.263 "rw_mbytes_per_sec": 0, 00:22:29.263 "r_mbytes_per_sec": 0, 00:22:29.263 "w_mbytes_per_sec": 0 00:22:29.263 }, 00:22:29.263 "claimed": false, 00:22:29.263 "zoned": false, 00:22:29.263 "supported_io_types": { 00:22:29.263 "read": true, 00:22:29.263 "write": true, 00:22:29.263 "unmap": false, 00:22:29.263 "flush": true, 00:22:29.263 "reset": true, 00:22:29.263 "nvme_admin": true, 00:22:29.263 "nvme_io": true, 00:22:29.263 "nvme_io_md": false, 00:22:29.263 "write_zeroes": true, 00:22:29.263 "zcopy": false, 00:22:29.263 "get_zone_info": false, 00:22:29.263 "zone_management": false, 00:22:29.263 "zone_append": false, 00:22:29.263 "compare": true, 00:22:29.263 "compare_and_write": true, 00:22:29.263 "abort": true, 00:22:29.263 "seek_hole": false, 00:22:29.263 "seek_data": false, 00:22:29.263 "copy": true, 00:22:29.263 "nvme_iov_md": false 00:22:29.263 }, 00:22:29.263 "memory_domains": [ 00:22:29.263 { 00:22:29.263 "dma_device_id": "system", 00:22:29.263 "dma_device_type": 1 00:22:29.263 } 00:22:29.263 ], 00:22:29.263 "driver_specific": { 00:22:29.263 "nvme": [ 00:22:29.263 { 00:22:29.263 "trid": { 00:22:29.263 "trtype": "TCP", 00:22:29.263 "adrfam": "IPv4", 00:22:29.263 "traddr": "10.0.0.2", 00:22:29.263 "trsvcid": "4421", 00:22:29.263 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:29.263 }, 00:22:29.264 "ctrlr_data": { 00:22:29.264 "cntlid": 3, 00:22:29.264 "vendor_id": "0x8086", 00:22:29.264 "model_number": "SPDK bdev Controller", 00:22:29.264 "serial_number": "00000000000000000000", 00:22:29.264 "firmware_revision": "25.01", 00:22:29.264 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:29.264 "oacs": { 00:22:29.264 "security": 0, 00:22:29.264 "format": 0, 00:22:29.264 "firmware": 0, 00:22:29.264 "ns_manage": 0 00:22:29.264 }, 00:22:29.264 "multi_ctrlr": true, 
00:22:29.264 "ana_reporting": false 00:22:29.264 }, 00:22:29.264 "vs": { 00:22:29.264 "nvme_version": "1.3" 00:22:29.264 }, 00:22:29.264 "ns_data": { 00:22:29.264 "id": 1, 00:22:29.264 "can_share": true 00:22:29.264 } 00:22:29.264 } 00:22:29.264 ], 00:22:29.264 "mp_policy": "active_passive" 00:22:29.264 } 00:22:29.264 } 00:22:29.264 ] 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@73 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@76 -- # rm -f /tmp/tmp.TRK56d0vWo 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # trap - SIGINT SIGTERM EXIT 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@79 -- # nvmftestfini 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@99 -- # sync 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@102 -- # set +e 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:29.264 rmmod nvme_tcp 00:22:29.264 rmmod nvme_fabrics 00:22:29.264 rmmod nvme_keyring 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@106 -- # set -e 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@107 -- # return 0 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@336 -- # '[' -n 3300424 ']' 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@337 -- # killprocess 3300424 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # '[' -z 3300424 ']' 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # kill -0 3300424 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # uname 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:29.264 10:40:09 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3300424 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3300424' 00:22:29.523 killing process with pid 3300424 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@973 -- # kill 3300424 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@978 -- # wait 3300424 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # nvmf_fini 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@264 -- # local dev 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@267 -- # remove_target_ns 
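The TLS key registered earlier in this test via keyring_file_add_key (`NVMeTLSkey-1:01:…JEiQ:`) is in the NVMe/TCP PSK interchange format: a name tag, a hash identifier, and a base64 payload carrying the configured PSK plus a 4-byte CRC32 trailer. A minimal sketch that unpacks the exact key string from the log — the field meanings (and reading `01` as the hash-identifier field) come from the interchange format, not from the log itself:

```python
import base64

# PSK interchange string registered in the log via keyring_file_add_key.
# Colon-separated fields: name tag, hash identifier, base64 payload,
# and a trailing ':' that closes the format.
psk = "NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ:"

prefix, hash_id, payload, _ = psk.split(":")
assert prefix == "NVMeTLSkey-1"   # interchange-format name tag
assert hash_id == "01"            # hash identifier field (per the format)

decoded = base64.b64decode(payload)
# Payload layout per the format: configured PSK followed by 4 CRC32 bytes.
key, crc = decoded[:-4], decoded[-4:]
print(len(key), key)  # -> 32 b'00112233445566778899aabbccddeeff'
```

This is only a format illustration; SPDK's keyring consumes the file as-is, and the CRC bytes are left unverified here.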
00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:29.523 10:40:10 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@130 -- # return 0 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:32.058 10:40:12 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # _dev=0 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@41 -- # dev_map=() 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/setup.sh@284 -- # iptr 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-save 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@542 -- # iptables-restore 00:22:32.058 00:22:32.058 real 0m9.546s 00:22:32.058 user 0m3.051s 00:22:32.058 sys 0m4.934s 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:32.058 ************************************ 00:22:32.058 END TEST nvmf_async_init 00:22:32.058 ************************************ 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@20 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 
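The bdev_get_bdevs JSON that the async_init test dumps after each attach can be sanity-checked mechanically. A small sketch against an abridged copy of the fields shown in the log (the real output carries many more keys), checking that the `-g` nguid passed to nvmf_subsystem_add_ns is the dashed UUID in the bdev record and that the geometry matches the 1 GiB namespace:

```python
import json
import uuid

# Abridged from the bdev_get_bdevs output in the log above.
bdev = json.loads("""
{
  "name": "nvme0n1",
  "aliases": ["22402739-928a-4ec3-a89d-75052d570f72"],
  "product_name": "NVMe disk",
  "block_size": 512,
  "num_blocks": 2097152,
  "uuid": "22402739-928a-4ec3-a89d-75052d570f72"
}
""")

# nvmf_subsystem_add_ns was given the same UUID, just without dashes.
nguid = "22402739928a4ec3a89d75052d570f72"
assert uuid.UUID(bdev["uuid"]) == uuid.UUID(nguid)
assert bdev["aliases"][0] == bdev["uuid"]

# 2097152 blocks * 512 B = 1 GiB for the null bdev the namespace wraps.
size_bytes = bdev["num_blocks"] * bdev["block_size"]
print(size_bytes // (1024 ** 3))  # -> 1
```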
00:22:32.058 ************************************ 00:22:32.058 START TEST nvmf_identify 00:22:32.058 ************************************ 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:32.058 * Looking for test storage... 00:22:32.058 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lcov --version 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:32.058 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # IFS=.-: 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@336 -- # read -ra ver1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # IFS=.-: 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@337 -- # read -ra ver2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@338 -- # local 'op=<' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@340 -- # ver1_l=2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@341 -- # ver2_l=1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 
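The cmp_versions helper being traced in this stretch of scripts/common.sh splits each version string on `.` and `-` (via `IFS=.-` and `read -ra`), then walks the components left to right to decide that lcov 1.15 is older than 2. The same idea, sketched in Python for illustration (the shell helper's exact edge-case handling is not reproduced):

```python
import re

def cmp_versions(v1: str, v2: str) -> int:
    """Compare dotted versions component-wise, like the traced shell
    helper: split on '.'/'-', then numeric comparison left to right."""
    a = [int(x) for x in re.split(r"[.-]", v1)]
    b = [int(x) for x in re.split(r"[.-]", v2)]
    # Missing trailing components compare as 0, so "2" behaves like "2.0".
    for i in range(max(len(a), len(b))):
        x = a[i] if i < len(a) else 0
        y = b[i] if i < len(b) else 0
        if x != y:
            return -1 if x < y else 1
    return 0

print(cmp_versions("1.15", "2"))  # -> -1, i.e. lcov 1.15 < 2
```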
00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@344 -- # case "$op" in 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@345 -- # : 1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # decimal 1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@365 -- # ver1[v]=1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # decimal 2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@353 -- # local d=2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@355 -- # echo 2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@366 -- # ver2[v]=2 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@368 -- # return 0 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:32.059 --rc 
lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.059 --rc genhtml_branch_coverage=1 00:22:32.059 --rc genhtml_function_coverage=1 00:22:32.059 --rc genhtml_legend=1 00:22:32.059 --rc geninfo_all_blocks=1 00:22:32.059 --rc geninfo_unexecuted_blocks=1 00:22:32.059 00:22:32.059 ' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.059 --rc genhtml_branch_coverage=1 00:22:32.059 --rc genhtml_function_coverage=1 00:22:32.059 --rc genhtml_legend=1 00:22:32.059 --rc geninfo_all_blocks=1 00:22:32.059 --rc geninfo_unexecuted_blocks=1 00:22:32.059 00:22:32.059 ' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.059 --rc genhtml_branch_coverage=1 00:22:32.059 --rc genhtml_function_coverage=1 00:22:32.059 --rc genhtml_legend=1 00:22:32.059 --rc geninfo_all_blocks=1 00:22:32.059 --rc geninfo_unexecuted_blocks=1 00:22:32.059 00:22:32.059 ' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:32.059 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:32.059 --rc genhtml_branch_coverage=1 00:22:32.059 --rc genhtml_function_coverage=1 00:22:32.059 --rc genhtml_legend=1 00:22:32.059 --rc geninfo_all_blocks=1 00:22:32.059 --rc geninfo_unexecuted_blocks=1 00:22:32.059 00:22:32.059 ' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:32.059 
10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@15 -- # shopt -s extglob 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@553 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:32.059 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@50 -- # : 0 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:32.060 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # remove_target_ns 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
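The trace above records a genuine shell error: `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected`, triggered by feeding an empty string to the `[` builtin's arithmetic test (`'[' '' -eq 1 ']'`). A minimal sketch of a guarded form that avoids the error, assuming the flag simply means "disabled" when unset (`check_flag` is a hypothetical helper, not part of SPDK):

```shell
#!/usr/bin/env bash
# Sketch only: guard the numeric comparison so an empty/unset flag does not
# make `[` emit "integer expression expected", as seen in the log.
check_flag() {
  local flag=$1
  # Test numerically only when the value is non-empty; empty means disabled.
  if [[ -n "$flag" && "$flag" -eq 1 ]]; then
    echo enabled
  else
    echo disabled
  fi
}

check_flag ""    # no error, prints: disabled
check_flag 1     # prints: enabled
```

The error is harmless here (the `''` case falls through to the `-n` branch anyway), but the guarded `[[ ... ]]` form keeps the trace clean.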
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # xtrace_disable 00:22:32.060 10:40:12 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # pci_devs=() 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # net_devs=() 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # e810=() 00:22:38.625 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@136 -- # local -ga e810 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # x722=() 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@137 -- # local -ga x722 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # mlx=() 00:22:38.626 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@138 -- # local -ga mlx 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 
00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:38.626 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:38.626 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 
-- # [[ e810 == e810 ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:38.626 Found net devices under 0000:86:00.0: cvl_0_0 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:38.626 
10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:38.626 Found net devices under 0000:86:00.1: cvl_0_1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@262 -- # is_hw=yes 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@257 -- # create_target_ns 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@28 -- # local -g _dev 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # ips=() 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:38.626 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772161 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:38.626 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:38.626 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:38.626 10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@11 -- # local val=167772162 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:38.627 10.0.0.2 00:22:38.627 10:40:18 
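The `set_ip` calls above convert a packed integer from the IP pool into dotted-quad form via `val_to_ip` (`167772161` is `0x0A000001`, i.e. `10.0.0.1`; the next value yields `10.0.0.2`). A self-contained sketch of that conversion, mirroring the `printf '%u.%u.%u.%u\n'` call visible in the trace (the byte-extraction arithmetic here is an assumption about how `setup.sh` derives the four octets):

```shell
#!/usr/bin/env bash
# Sketch of the val_to_ip helper traced in nvmf/setup.sh: split a 32-bit
# integer into four octets and print them dotted, most-significant first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xFF )) \
    $(( (val >> 16) & 0xFF )) \
    $(( (val >>  8) & 0xFF )) \
    $((  val        & 0xFF ))
}

val_to_ip 167772161   # 10.0.0.1  (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

This explains why the setup loop can hand out initiator/target pairs by simply incrementing `ip_pool` by 2 per interface pair.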
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 
00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:38.627 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:38.627 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:38.627 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:22:38.627 00:22:38.627 --- 10.0.0.1 ping statistics --- 00:22:38.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.627 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:38.627 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:38.627 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:38.627 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.154 ms 00:22:38.627 00:22:38.627 --- 10.0.0.2 ping statistics --- 00:22:38.627 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:38.627 rtt min/avg/max/mdev = 0.154/0.154/0.154/0.000 ms 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # return 0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:38.627 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:38.627 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:38.628 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target0 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:38.628 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@107 -- # local dev=target1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:38.628 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@109 -- # return 1 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@168 -- # dev= 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@169 -- # return 0 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=3304214 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 3304214 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:38.628 10:40:18 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@835 -- # '[' -z 3304214 ']' 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:38.628 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:38.628 10:40:18 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.628 [2024-11-20 10:40:18.657293] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:38.628 [2024-11-20 10:40:18.657336] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:38.628 [2024-11-20 10:40:18.735418] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:38.628 [2024-11-20 10:40:18.778039] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:38.628 [2024-11-20 10:40:18.778074] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:38.628 [2024-11-20 10:40:18.778082] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:38.628 [2024-11-20 10:40:18.778088] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:38.628 [2024-11-20 10:40:18.778093] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:38.628 [2024-11-20 10:40:18.779657] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:38.628 [2024-11-20 10:40:18.779745] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:38.628 [2024-11-20 10:40:18.779855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:38.628 [2024-11-20 10:40:18.779855] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@868 -- # return 0 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 [2024-11-20 10:40:19.501481] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 Malloc0 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 
nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 [2024-11-20 10:40:19.599467] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:38.887 10:40:19 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:38.887 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:39.148 [ 00:22:39.148 { 00:22:39.148 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:39.148 "subtype": "Discovery", 00:22:39.148 "listen_addresses": [ 00:22:39.148 { 00:22:39.149 "trtype": "TCP", 00:22:39.149 "adrfam": "IPv4", 00:22:39.149 "traddr": "10.0.0.2", 00:22:39.149 "trsvcid": "4420" 00:22:39.149 } 00:22:39.149 ], 00:22:39.149 "allow_any_host": true, 00:22:39.149 "hosts": [] 00:22:39.149 }, 00:22:39.149 { 00:22:39.149 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:39.149 "subtype": "NVMe", 00:22:39.149 "listen_addresses": [ 00:22:39.149 { 00:22:39.149 "trtype": "TCP", 00:22:39.149 "adrfam": "IPv4", 00:22:39.149 "traddr": "10.0.0.2", 00:22:39.149 "trsvcid": "4420" 00:22:39.149 } 00:22:39.149 ], 00:22:39.149 "allow_any_host": true, 00:22:39.149 "hosts": [], 00:22:39.149 "serial_number": "SPDK00000000000001", 00:22:39.149 "model_number": "SPDK bdev Controller", 00:22:39.149 "max_namespaces": 32, 00:22:39.149 "min_cntlid": 1, 00:22:39.149 "max_cntlid": 65519, 00:22:39.149 "namespaces": [ 00:22:39.149 { 00:22:39.149 "nsid": 1, 00:22:39.149 "bdev_name": "Malloc0", 00:22:39.149 "name": "Malloc0", 00:22:39.149 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:39.149 "eui64": "ABCDEF0123456789", 00:22:39.149 "uuid": "f78731cb-1c09-4753-a6c6-4058bdfdb15c" 00:22:39.149 } 00:22:39.149 ] 00:22:39.149 } 00:22:39.149 ] 00:22:39.149 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.149 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:39.149 [2024-11-20 10:40:19.655025] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:39.149 [2024-11-20 10:40:19.655056] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304353 ] 00:22:39.149 [2024-11-20 10:40:19.692970] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to connect adminq (no timeout) 00:22:39.149 [2024-11-20 10:40:19.693018] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:39.149 [2024-11-20 10:40:19.693023] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:39.149 [2024-11-20 10:40:19.693034] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:39.149 [2024-11-20 10:40:19.693044] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:39.149 [2024-11-20 10:40:19.700483] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 0] setting state to wait for connect adminq (no timeout) 00:22:39.149 [2024-11-20 10:40:19.700519] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x11d9690 0 00:22:39.149 [2024-11-20 10:40:19.700631] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:39.149 [2024-11-20 10:40:19.700639] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:39.149 [2024-11-20 10:40:19.700646] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:39.149 [2024-11-20 10:40:19.700649] 
nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:39.149 [2024-11-20 10:40:19.700677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.700682] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.700686] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.149 [2024-11-20 10:40:19.700699] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:39.149 [2024-11-20 10:40:19.700712] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.708213] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.708223] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.149 [2024-11-20 10:40:19.708226] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708230] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.149 [2024-11-20 10:40:19.708242] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:39.149 [2024-11-20 10:40:19.708248] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs (no timeout) 00:22:39.149 [2024-11-20 10:40:19.708253] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read vs wait for vs (no timeout) 00:22:39.149 [2024-11-20 10:40:19.708266] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708270] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708274] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 
00:22:39.149 [2024-11-20 10:40:19.708281] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.149 [2024-11-20 10:40:19.708293] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.708450] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.708456] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.149 [2024-11-20 10:40:19.708459] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708462] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.149 [2024-11-20 10:40:19.708467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap (no timeout) 00:22:39.149 [2024-11-20 10:40:19.708473] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to read cap wait for cap (no timeout) 00:22:39.149 [2024-11-20 10:40:19.708480] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708484] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.149 [2024-11-20 10:40:19.708492] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.149 [2024-11-20 10:40:19.708502] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.708566] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.708571] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 
00:22:39.149 [2024-11-20 10:40:19.708574] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708577] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.149 [2024-11-20 10:40:19.708582] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en (no timeout) 00:22:39.149 [2024-11-20 10:40:19.708592] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:39.149 [2024-11-20 10:40:19.708598] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708601] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708604] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.149 [2024-11-20 10:40:19.708610] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.149 [2024-11-20 10:40:19.708619] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.708682] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.708688] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.149 [2024-11-20 10:40:19.708691] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708695] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.149 [2024-11-20 10:40:19.708699] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:39.149 [2024-11-20 10:40:19.708708] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708711] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708714] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.149 [2024-11-20 10:40:19.708720] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.149 [2024-11-20 10:40:19.708729] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.708799] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.708805] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.149 [2024-11-20 10:40:19.708808] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708811] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.149 [2024-11-20 10:40:19.708815] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:39.149 [2024-11-20 10:40:19.708820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:39.149 [2024-11-20 10:40:19.708826] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:39.149 [2024-11-20 10:40:19.708934] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Setting CC.EN = 1 00:22:39.149 [2024-11-20 10:40:19.708938] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 
15000 ms) 00:22:39.149 [2024-11-20 10:40:19.708946] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708949] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.708952] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.149 [2024-11-20 10:40:19.708958] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.149 [2024-11-20 10:40:19.708968] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.149 [2024-11-20 10:40:19.709054] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.149 [2024-11-20 10:40:19.709059] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.149 [2024-11-20 10:40:19.709062] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.149 [2024-11-20 10:40:19.709068] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.709072] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:39.150 [2024-11-20 10:40:19.709080] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709084] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709087] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709092] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.150 [2024-11-20 10:40:19.709102] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.150 [2024-11-20 
10:40:19.709168] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.150 [2024-11-20 10:40:19.709173] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.150 [2024-11-20 10:40:19.709176] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709180] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.709184] nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:39.150 [2024-11-20 10:40:19.709188] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709195] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to identify controller (no timeout) 00:22:39.150 [2024-11-20 10:40:19.709210] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709218] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709221] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709227] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.150 [2024-11-20 10:40:19.709237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.150 [2024-11-20 10:40:19.709335] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.150 [2024-11-20 10:40:19.709340] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: 
pdu type =7 00:22:39.150 [2024-11-20 10:40:19.709344] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709348] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d9690): datao=0, datal=4096, cccid=0 00:22:39.150 [2024-11-20 10:40:19.709351] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123b100) on tqpair(0x11d9690): expected_datao=0, payload_size=4096 00:22:39.150 [2024-11-20 10:40:19.709356] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709362] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709366] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709402] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.150 [2024-11-20 10:40:19.709407] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.150 [2024-11-20 10:40:19.709410] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709414] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.709420] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_xfer_size 4294967295 00:22:39.150 [2024-11-20 10:40:19.709425] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] MDTS max_xfer_size 131072 00:22:39.150 [2024-11-20 10:40:19.709431] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] CNTLID 0x0001 00:22:39.150 [2024-11-20 10:40:19.709438] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] transport max_sges 16 00:22:39.150 [2024-11-20 10:40:19.709443] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] fuses compare and write: 1 00:22:39.150 [2024-11-20 10:40:19.709447] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to configure AER (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709458] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709464] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709467] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709470] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709477] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:39.150 [2024-11-20 10:40:19.709487] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.150 [2024-11-20 10:40:19.709570] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.150 [2024-11-20 10:40:19.709575] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.150 [2024-11-20 10:40:19.709578] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709582] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.709588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709592] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709595] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709600] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.150 [2024-11-20 10:40:19.709605] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709609] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709612] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709617] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.150 [2024-11-20 10:40:19.709621] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709625] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709628] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.150 [2024-11-20 10:40:19.709638] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709641] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709644] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709649] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.150 [2024-11-20 10:40:19.709653] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709661] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: 
[nqn.2014-08.org.nvmexpress.discovery, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:39.150 [2024-11-20 10:40:19.709668] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709672] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709677] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.150 [2024-11-20 10:40:19.709688] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b100, cid 0, qid 0 00:22:39.150 [2024-11-20 10:40:19.709692] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b280, cid 1, qid 0 00:22:39.150 [2024-11-20 10:40:19.709696] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b400, cid 2, qid 0 00:22:39.150 [2024-11-20 10:40:19.709700] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.150 [2024-11-20 10:40:19.709704] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b700, cid 4, qid 0 00:22:39.150 [2024-11-20 10:40:19.709806] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.150 [2024-11-20 10:40:19.709812] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.150 [2024-11-20 10:40:19.709815] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709818] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b700) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.709825] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Sending keep alive every 5000000 us 00:22:39.150 [2024-11-20 10:40:19.709830] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] 
setting state to ready (no timeout) 00:22:39.150 [2024-11-20 10:40:19.709838] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709842] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d9690) 00:22:39.150 [2024-11-20 10:40:19.709848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.150 [2024-11-20 10:40:19.709857] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b700, cid 4, qid 0 00:22:39.150 [2024-11-20 10:40:19.709935] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.150 [2024-11-20 10:40:19.709941] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.150 [2024-11-20 10:40:19.709944] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709947] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d9690): datao=0, datal=4096, cccid=4 00:22:39.150 [2024-11-20 10:40:19.709950] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123b700) on tqpair(0x11d9690): expected_datao=0, payload_size=4096 00:22:39.150 [2024-11-20 10:40:19.709955] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709965] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.709968] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.750354] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.150 [2024-11-20 10:40:19.750367] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.150 [2024-11-20 10:40:19.750371] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.750374] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x123b700) on tqpair=0x11d9690 00:22:39.150 [2024-11-20 10:40:19.750389] nvme_ctrlr.c:4202:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Ctrlr already in ready state 00:22:39.150 [2024-11-20 10:40:19.750413] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.150 [2024-11-20 10:40:19.750417] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d9690) 00:22:39.151 [2024-11-20 10:40:19.750425] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.151 [2024-11-20 10:40:19.750434] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750438] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750441] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x11d9690) 00:22:39.151 [2024-11-20 10:40:19.750446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.151 [2024-11-20 10:40:19.750461] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b700, cid 4, qid 0 00:22:39.151 [2024-11-20 10:40:19.750466] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b880, cid 5, qid 0 00:22:39.151 [2024-11-20 10:40:19.750568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.151 [2024-11-20 10:40:19.750574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.151 [2024-11-20 10:40:19.750577] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750580] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d9690): datao=0, datal=1024, cccid=4 00:22:39.151 [2024-11-20 10:40:19.750584] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123b700) on tqpair(0x11d9690): expected_datao=0, payload_size=1024 00:22:39.151 [2024-11-20 10:40:19.750588] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750594] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750597] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750602] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.151 [2024-11-20 10:40:19.750607] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.151 [2024-11-20 10:40:19.750609] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.750613] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b880) on tqpair=0x11d9690 00:22:39.151 [2024-11-20 10:40:19.792378] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.151 [2024-11-20 10:40:19.792390] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.151 [2024-11-20 10:40:19.792393] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792397] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b700) on tqpair=0x11d9690 00:22:39.151 [2024-11-20 10:40:19.792409] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792413] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d9690) 00:22:39.151 [2024-11-20 10:40:19.792420] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.151 [2024-11-20 10:40:19.792438] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b700, cid 4, qid 0 00:22:39.151 [2024-11-20 10:40:19.792545] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.151 [2024-11-20 10:40:19.792552] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.151 [2024-11-20 10:40:19.792556] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792559] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d9690): datao=0, datal=3072, cccid=4 00:22:39.151 [2024-11-20 10:40:19.792563] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123b700) on tqpair(0x11d9690): expected_datao=0, payload_size=3072 00:22:39.151 [2024-11-20 10:40:19.792568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792574] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792578] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792594] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.151 [2024-11-20 10:40:19.792600] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.151 [2024-11-20 10:40:19.792603] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792609] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b700) on tqpair=0x11d9690 00:22:39.151 [2024-11-20 10:40:19.792618] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792622] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x11d9690) 00:22:39.151 [2024-11-20 10:40:19.792628] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.151 [2024-11-20 10:40:19.792642] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b700, cid 4, qid 0 00:22:39.151 [2024-11-20 
10:40:19.792715] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.151 [2024-11-20 10:40:19.792721] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.151 [2024-11-20 10:40:19.792725] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792728] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x11d9690): datao=0, datal=8, cccid=4 00:22:39.151 [2024-11-20 10:40:19.792733] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x123b700) on tqpair(0x11d9690): expected_datao=0, payload_size=8 00:22:39.151 [2024-11-20 10:40:19.792736] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792742] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.792746] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.833371] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.151 [2024-11-20 10:40:19.833384] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.151 [2024-11-20 10:40:19.833387] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.151 [2024-11-20 10:40:19.833391] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b700) on tqpair=0x11d9690 00:22:39.151 ===================================================== 00:22:39.151 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:39.151 ===================================================== 00:22:39.151 Controller Capabilities/Features 00:22:39.151 ================================ 00:22:39.151 Vendor ID: 0000 00:22:39.151 Subsystem Vendor ID: 0000 00:22:39.151 Serial Number: .................... 00:22:39.151 Model Number: ........................................ 
00:22:39.151 Firmware Version: 25.01 00:22:39.151 Recommended Arb Burst: 0 00:22:39.151 IEEE OUI Identifier: 00 00 00 00:22:39.151 Multi-path I/O 00:22:39.151 May have multiple subsystem ports: No 00:22:39.151 May have multiple controllers: No 00:22:39.151 Associated with SR-IOV VF: No 00:22:39.151 Max Data Transfer Size: 131072 00:22:39.151 Max Number of Namespaces: 0 00:22:39.151 Max Number of I/O Queues: 1024 00:22:39.151 NVMe Specification Version (VS): 1.3 00:22:39.151 NVMe Specification Version (Identify): 1.3 00:22:39.151 Maximum Queue Entries: 128 00:22:39.151 Contiguous Queues Required: Yes 00:22:39.151 Arbitration Mechanisms Supported 00:22:39.151 Weighted Round Robin: Not Supported 00:22:39.151 Vendor Specific: Not Supported 00:22:39.151 Reset Timeout: 15000 ms 00:22:39.151 Doorbell Stride: 4 bytes 00:22:39.151 NVM Subsystem Reset: Not Supported 00:22:39.151 Command Sets Supported 00:22:39.151 NVM Command Set: Supported 00:22:39.151 Boot Partition: Not Supported 00:22:39.151 Memory Page Size Minimum: 4096 bytes 00:22:39.151 Memory Page Size Maximum: 4096 bytes 00:22:39.151 Persistent Memory Region: Not Supported 00:22:39.151 Optional Asynchronous Events Supported 00:22:39.151 Namespace Attribute Notices: Not Supported 00:22:39.151 Firmware Activation Notices: Not Supported 00:22:39.151 ANA Change Notices: Not Supported 00:22:39.151 PLE Aggregate Log Change Notices: Not Supported 00:22:39.151 LBA Status Info Alert Notices: Not Supported 00:22:39.151 EGE Aggregate Log Change Notices: Not Supported 00:22:39.151 Normal NVM Subsystem Shutdown event: Not Supported 00:22:39.151 Zone Descriptor Change Notices: Not Supported 00:22:39.151 Discovery Log Change Notices: Supported 00:22:39.151 Controller Attributes 00:22:39.151 128-bit Host Identifier: Not Supported 00:22:39.151 Non-Operational Permissive Mode: Not Supported 00:22:39.151 NVM Sets: Not Supported 00:22:39.151 Read Recovery Levels: Not Supported 00:22:39.151 Endurance Groups: Not Supported 00:22:39.151 
Predictable Latency Mode: Not Supported 00:22:39.151 Traffic Based Keep Alive: Not Supported 00:22:39.151 Namespace Granularity: Not Supported 00:22:39.151 SQ Associations: Not Supported 00:22:39.151 UUID List: Not Supported 00:22:39.151 Multi-Domain Subsystem: Not Supported 00:22:39.151 Fixed Capacity Management: Not Supported 00:22:39.151 Variable Capacity Management: Not Supported 00:22:39.151 Delete Endurance Group: Not Supported 00:22:39.151 Delete NVM Set: Not Supported 00:22:39.151 Extended LBA Formats Supported: Not Supported 00:22:39.151 Flexible Data Placement Supported: Not Supported 00:22:39.151 00:22:39.151 Controller Memory Buffer Support 00:22:39.151 ================================ 00:22:39.151 Supported: No 00:22:39.151 00:22:39.151 Persistent Memory Region Support 00:22:39.151 ================================ 00:22:39.151 Supported: No 00:22:39.151 00:22:39.151 Admin Command Set Attributes 00:22:39.151 ============================ 00:22:39.151 Security Send/Receive: Not Supported 00:22:39.152 Format NVM: Not Supported 00:22:39.152 Firmware Activate/Download: Not Supported 00:22:39.152 Namespace Management: Not Supported 00:22:39.152 Device Self-Test: Not Supported 00:22:39.152 Directives: Not Supported 00:22:39.152 NVMe-MI: Not Supported 00:22:39.152 Virtualization Management: Not Supported 00:22:39.152 Doorbell Buffer Config: Not Supported 00:22:39.152 Get LBA Status Capability: Not Supported 00:22:39.152 Command & Feature Lockdown Capability: Not Supported 00:22:39.152 Abort Command Limit: 1 00:22:39.152 Async Event Request Limit: 4 00:22:39.152 Number of Firmware Slots: N/A 00:22:39.152 Firmware Slot 1 Read-Only: N/A 00:22:39.152 Firmware Activation Without Reset: N/A 00:22:39.152 Multiple Update Detection Support: N/A 00:22:39.152 Firmware Update Granularity: No Information Provided 00:22:39.152 Per-Namespace SMART Log: No 00:22:39.152 Asymmetric Namespace Access Log Page: Not Supported 00:22:39.152 Subsystem NQN: 
nqn.2014-08.org.nvmexpress.discovery 00:22:39.152 Command Effects Log Page: Not Supported 00:22:39.152 Get Log Page Extended Data: Supported 00:22:39.152 Telemetry Log Pages: Not Supported 00:22:39.152 Persistent Event Log Pages: Not Supported 00:22:39.152 Supported Log Pages Log Page: May Support 00:22:39.152 Commands Supported & Effects Log Page: Not Supported 00:22:39.152 Feature Identifiers & Effects Log Page: May Support 00:22:39.152 NVMe-MI Commands & Effects Log Page: May Support 00:22:39.152 Data Area 4 for Telemetry Log: Not Supported 00:22:39.152 Error Log Page Entries Supported: 128 00:22:39.152 Keep Alive: Not Supported 00:22:39.152 00:22:39.152 NVM Command Set Attributes 00:22:39.152 ========================== 00:22:39.152 Submission Queue Entry Size 00:22:39.152 Max: 1 00:22:39.152 Min: 1 00:22:39.152 Completion Queue Entry Size 00:22:39.152 Max: 1 00:22:39.152 Min: 1 00:22:39.152 Number of Namespaces: 0 00:22:39.152 Compare Command: Not Supported 00:22:39.152 Write Uncorrectable Command: Not Supported 00:22:39.152 Dataset Management Command: Not Supported 00:22:39.152 Write Zeroes Command: Not Supported 00:22:39.152 Set Features Save Field: Not Supported 00:22:39.152 Reservations: Not Supported 00:22:39.152 Timestamp: Not Supported 00:22:39.152 Copy: Not Supported 00:22:39.152 Volatile Write Cache: Not Present 00:22:39.152 Atomic Write Unit (Normal): 1 00:22:39.152 Atomic Write Unit (PFail): 1 00:22:39.152 Atomic Compare & Write Unit: 1 00:22:39.152 Fused Compare & Write: Supported 00:22:39.152 Scatter-Gather List 00:22:39.152 SGL Command Set: Supported 00:22:39.152 SGL Keyed: Supported 00:22:39.152 SGL Bit Bucket Descriptor: Not Supported 00:22:39.152 SGL Metadata Pointer: Not Supported 00:22:39.152 Oversized SGL: Not Supported 00:22:39.152 SGL Metadata Address: Not Supported 00:22:39.152 SGL Offset: Supported 00:22:39.152 Transport SGL Data Block: Not Supported 00:22:39.152 Replay Protected Memory Block: Not Supported 00:22:39.152 00:22:39.152 
Firmware Slot Information 00:22:39.152 ========================= 00:22:39.152 Active slot: 0 00:22:39.152 00:22:39.152 00:22:39.152 Error Log 00:22:39.152 ========= 00:22:39.152 00:22:39.152 Active Namespaces 00:22:39.152 ================= 00:22:39.152 Discovery Log Page 00:22:39.152 ================== 00:22:39.152 Generation Counter: 2 00:22:39.152 Number of Records: 2 00:22:39.152 Record Format: 0 00:22:39.152 00:22:39.152 Discovery Log Entry 0 00:22:39.152 ---------------------- 00:22:39.152 Transport Type: 3 (TCP) 00:22:39.152 Address Family: 1 (IPv4) 00:22:39.152 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:39.152 Entry Flags: 00:22:39.152 Duplicate Returned Information: 1 00:22:39.152 Explicit Persistent Connection Support for Discovery: 1 00:22:39.152 Transport Requirements: 00:22:39.152 Secure Channel: Not Required 00:22:39.152 Port ID: 0 (0x0000) 00:22:39.152 Controller ID: 65535 (0xffff) 00:22:39.152 Admin Max SQ Size: 128 00:22:39.152 Transport Service Identifier: 4420 00:22:39.152 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:39.152 Transport Address: 10.0.0.2 00:22:39.152 Discovery Log Entry 1 00:22:39.152 ---------------------- 00:22:39.152 Transport Type: 3 (TCP) 00:22:39.152 Address Family: 1 (IPv4) 00:22:39.152 Subsystem Type: 2 (NVM Subsystem) 00:22:39.152 Entry Flags: 00:22:39.152 Duplicate Returned Information: 0 00:22:39.152 Explicit Persistent Connection Support for Discovery: 0 00:22:39.152 Transport Requirements: 00:22:39.152 Secure Channel: Not Required 00:22:39.152 Port ID: 0 (0x0000) 00:22:39.152 Controller ID: 65535 (0xffff) 00:22:39.152 Admin Max SQ Size: 128 00:22:39.152 Transport Service Identifier: 4420 00:22:39.152 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:39.152 Transport Address: 10.0.0.2 [2024-11-20 10:40:19.833475] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] Prepare to destruct SSD 00:22:39.152 [2024-11-20 
10:40:19.833487] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b100) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.152 [2024-11-20 10:40:19.833499] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b280) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.152 [2024-11-20 10:40:19.833507] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b400) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.152 [2024-11-20 10:40:19.833515] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.152 [2024-11-20 10:40:19.833529] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833533] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833536] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.152 [2024-11-20 10:40:19.833544] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.152 [2024-11-20 10:40:19.833558] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.152 [2024-11-20 10:40:19.833621] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.152 [2024-11-20 
10:40:19.833626] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.152 [2024-11-20 10:40:19.833630] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833635] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833641] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833645] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833648] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.152 [2024-11-20 10:40:19.833654] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.152 [2024-11-20 10:40:19.833666] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.152 [2024-11-20 10:40:19.833745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.152 [2024-11-20 10:40:19.833750] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.152 [2024-11-20 10:40:19.833753] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833761] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] RTD3E = 0 us 00:22:39.152 [2024-11-20 10:40:19.833765] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown timeout = 10000 ms 00:22:39.152 [2024-11-20 10:40:19.833773] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833777] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.152 
[2024-11-20 10:40:19.833780] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.152 [2024-11-20 10:40:19.833786] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.152 [2024-11-20 10:40:19.833796] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.152 [2024-11-20 10:40:19.833856] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.152 [2024-11-20 10:40:19.833861] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.152 [2024-11-20 10:40:19.833864] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833868] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.152 [2024-11-20 10:40:19.833876] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.152 [2024-11-20 10:40:19.833880] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.833883] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.833889] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.833898] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.833959] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.833965] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.833968] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.833971] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on 
tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.833979] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.833982] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.833985] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.833991] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834000] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834060] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834068] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834071] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834074] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834082] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834086] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834089] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834094] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834104] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834181] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834186] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type 
=5 00:22:39.153 [2024-11-20 10:40:19.834189] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834193] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834206] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834210] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834213] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834219] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834229] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834291] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834296] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834299] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834303] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834311] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834314] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834317] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834323] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834332] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 
0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834396] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834401] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834404] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834407] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834416] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834419] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834422] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834428] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834437] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834498] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834504] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834509] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834512] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834521] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834524] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834527] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834533] 
nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834543] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834609] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834615] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834618] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834621] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834629] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834633] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834636] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834642] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834651] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.153 [2024-11-20 10:40:19.834711] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.153 [2024-11-20 10:40:19.834716] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.153 [2024-11-20 10:40:19.834719] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834722] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.153 [2024-11-20 10:40:19.834730] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834734] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.153 [2024-11-20 10:40:19.834737] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.153 [2024-11-20 10:40:19.834742] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.153 [2024-11-20 10:40:19.834752] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.154 [2024-11-20 10:40:19.836192] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.154 [2024-11-20 10:40:19.836197] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.154 [2024-11-20 10:40:19.836200] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.154 [2024-11-20 10:40:19.840210] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.154 [2024-11-20 10:40:19.840219] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.154 [2024-11-20 10:40:19.840222] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.154 [2024-11-20 10:40:19.840228] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x11d9690) 00:22:39.155 [2024-11-20 10:40:19.840234] nvme_qpair.c: 218:nvme_admin_qpair_print_command:
*NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.155 [2024-11-20 10:40:19.840245] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x123b580, cid 3, qid 0 00:22:39.155 [2024-11-20 10:40:19.840393] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.155 [2024-11-20 10:40:19.840399] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.155 [2024-11-20 10:40:19.840402] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.155 [2024-11-20 10:40:19.840405] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x123b580) on tqpair=0x11d9690 00:22:39.155 [2024-11-20 10:40:19.840411] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery, 1] shutdown complete in 6 milliseconds 00:22:39.155 00:22:39.155 10:40:19 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:39.417 [2024-11-20 10:40:19.878102] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:22:39.417 [2024-11-20 10:40:19.878136] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3304461 ] 00:22:39.417 [2024-11-20 10:40:19.918373] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to connect adminq (no timeout) 00:22:39.417 [2024-11-20 10:40:19.918411] nvme_tcp.c:2238:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:39.417 [2024-11-20 10:40:19.918416] nvme_tcp.c:2242:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:39.417 [2024-11-20 10:40:19.918427] nvme_tcp.c:2263:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:39.417 [2024-11-20 10:40:19.918435] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:39.417 [2024-11-20 10:40:19.922374] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 0] setting state to wait for connect adminq (no timeout) 00:22:39.417 [2024-11-20 10:40:19.922403] nvme_tcp.c:1455:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x177c690 0 00:22:39.417 [2024-11-20 10:40:19.929212] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:39.417 [2024-11-20 10:40:19.929226] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:39.417 [2024-11-20 10:40:19.929230] nvme_tcp.c:1501:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:39.417 [2024-11-20 10:40:19.929234] nvme_tcp.c:1502:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:39.417 [2024-11-20 10:40:19.929260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.929265] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.929269] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.417 [2024-11-20 10:40:19.929279] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:39.417 [2024-11-20 10:40:19.929297] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.417 [2024-11-20 10:40:19.936210] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.417 [2024-11-20 10:40:19.936218] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.417 [2024-11-20 10:40:19.936221] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.936225] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.417 [2024-11-20 10:40:19.936239] nvme_fabric.c: 621:nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:39.417 [2024-11-20 10:40:19.936246] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs (no timeout) 00:22:39.417 [2024-11-20 10:40:19.936250] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read vs wait for vs (no timeout) 00:22:39.417 [2024-11-20 10:40:19.936260] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.936264] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.936267] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.417 [2024-11-20 10:40:19.936274] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.417 [2024-11-20 10:40:19.936287] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.417 [2024-11-20 10:40:19.936444] 
nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.417 [2024-11-20 10:40:19.936450] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.417 [2024-11-20 10:40:19.936453] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.936456] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.417 [2024-11-20 10:40:19.936461] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap (no timeout) 00:22:39.417 [2024-11-20 10:40:19.936467] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to read cap wait for cap (no timeout) 00:22:39.417 [2024-11-20 10:40:19.936473] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.417 [2024-11-20 10:40:19.936477] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936480] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.936485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.936495] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.936558] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.936564] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.936567] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936570] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.936575] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] 
setting state to check en (no timeout) 00:22:39.418 [2024-11-20 10:40:19.936581] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to check en wait for cc (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.936587] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936590] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936593] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.936599] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.936608] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.936670] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.936675] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.936678] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936681] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.936688] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.936697] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936701] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936704] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.936709] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET 
qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.936719] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.936793] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.936799] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.936802] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936805] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.936809] nvme_ctrlr.c:3906:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 0 && CSTS.RDY = 0 00:22:39.418 [2024-11-20 10:40:19.936813] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to controller is disabled (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.936820] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.936927] nvme_ctrlr.c:4104:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Setting CC.EN = 1 00:22:39.418 [2024-11-20 10:40:19.936931] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.936938] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936941] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.936944] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.936949] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.936959] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.937040] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.937046] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.937049] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937052] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.937056] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:39.418 [2024-11-20 10:40:19.937064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937067] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937071] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.937076] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.937085] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.937140] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.937146] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.937149] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937152] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.937158] 
nvme_ctrlr.c:3941:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:39.418 [2024-11-20 10:40:19.937162] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to reset admin queue (timeout 30000 ms) 00:22:39.418 [2024-11-20 10:40:19.937169] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller (no timeout) 00:22:39.418 [2024-11-20 10:40:19.937180] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify controller (timeout 30000 ms) 00:22:39.418 [2024-11-20 10:40:19.937188] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937191] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.937197] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.418 [2024-11-20 10:40:19.937212] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.937300] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.418 [2024-11-20 10:40:19.937306] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.418 [2024-11-20 10:40:19.937309] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937312] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=4096, cccid=0 00:22:39.418 [2024-11-20 10:40:19.937316] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de100) on tqpair(0x177c690): expected_datao=0, payload_size=4096 00:22:39.418 [2024-11-20 10:40:19.937320] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937331] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937334] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937376] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.937382] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.937385] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937388] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.937394] nvme_ctrlr.c:2081:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_xfer_size 4294967295 00:22:39.418 [2024-11-20 10:40:19.937398] nvme_ctrlr.c:2085:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] MDTS max_xfer_size 131072 00:22:39.418 [2024-11-20 10:40:19.937402] nvme_ctrlr.c:2088:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] CNTLID 0x0001 00:22:39.418 [2024-11-20 10:40:19.937408] nvme_ctrlr.c:2112:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] transport max_sges 16 00:22:39.418 [2024-11-20 10:40:19.937412] nvme_ctrlr.c:2127:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] fuses compare and write: 1 00:22:39.418 [2024-11-20 10:40:19.937416] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to configure AER (timeout 30000 ms) 00:22:39.418 [2024-11-20 10:40:19.937425] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for configure aer (timeout 30000 ms) 00:22:39.418 [2024-11-20 10:40:19.937431] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937434] nvme_tcp.c: 
909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937437] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.937443] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:39.418 [2024-11-20 10:40:19.937456] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de100, cid 0, qid 0 00:22:39.418 [2024-11-20 10:40:19.937518] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.418 [2024-11-20 10:40:19.937523] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.418 [2024-11-20 10:40:19.937526] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937530] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.418 [2024-11-20 10:40:19.937535] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937539] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937542] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.937547] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.418 [2024-11-20 10:40:19.937552] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937556] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937559] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x177c690) 00:22:39.418 [2024-11-20 10:40:19.937563] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 
cdw10:00000000 cdw11:00000000 00:22:39.418 [2024-11-20 10:40:19.937568] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937572] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.418 [2024-11-20 10:40:19.937575] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.937579] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.419 [2024-11-20 10:40:19.937584] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937588] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.937595] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.419 [2024-11-20 10:40:19.937600] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937607] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937613] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937616] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.937621] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.419 [2024-11-20 10:40:19.937632] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x17de100, cid 0, qid 0 00:22:39.419 [2024-11-20 10:40:19.937637] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de280, cid 1, qid 0 00:22:39.419 [2024-11-20 10:40:19.937641] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de400, cid 2, qid 0 00:22:39.419 [2024-11-20 10:40:19.937645] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.419 [2024-11-20 10:40:19.937649] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.419 [2024-11-20 10:40:19.937745] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.419 [2024-11-20 10:40:19.937751] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.419 [2024-11-20 10:40:19.937754] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937757] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.419 [2024-11-20 10:40:19.937764] nvme_ctrlr.c:3059:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Sending keep alive every 5000000 us 00:22:39.419 [2024-11-20 10:40:19.937769] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937776] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set number of queues (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937782] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937787] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937791] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 
10:40:19.937794] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.937800] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:39.419 [2024-11-20 10:40:19.937809] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.419 [2024-11-20 10:40:19.937871] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.419 [2024-11-20 10:40:19.937877] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.419 [2024-11-20 10:40:19.937880] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937883] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.419 [2024-11-20 10:40:19.937933] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify active ns (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937943] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.937950] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.937954] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.937959] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.419 [2024-11-20 10:40:19.937969] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.419 [2024-11-20 10:40:19.938045] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.419 [2024-11-20 10:40:19.938051] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.419 [2024-11-20 10:40:19.938054] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938057] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=4096, cccid=4 00:22:39.419 [2024-11-20 10:40:19.938060] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de700) on tqpair(0x177c690): expected_datao=0, payload_size=4096 00:22:39.419 [2024-11-20 10:40:19.938064] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938070] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938073] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938082] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.419 [2024-11-20 10:40:19.938087] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.419 [2024-11-20 10:40:19.938090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.419 [2024-11-20 10:40:19.938102] nvme_ctrlr.c:4735:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Namespace 1 was added 00:22:39.419 [2024-11-20 10:40:19.938111] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.938120] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify ns (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.938126] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938130] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd 
cid=4 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.938135] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.419 [2024-11-20 10:40:19.938145] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.419 [2024-11-20 10:40:19.938227] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.419 [2024-11-20 10:40:19.938233] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.419 [2024-11-20 10:40:19.938236] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938239] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=4096, cccid=4 00:22:39.419 [2024-11-20 10:40:19.938243] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de700) on tqpair(0x177c690): expected_datao=0, payload_size=4096 00:22:39.419 [2024-11-20 10:40:19.938246] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938256] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.938260] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979340] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.419 [2024-11-20 10:40:19.979350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.419 [2024-11-20 10:40:19.979353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979356] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.419 [2024-11-20 10:40:19.979370] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:39.419 
[2024-11-20 10:40:19.979379] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:19.979386] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979389] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.419 [2024-11-20 10:40:19.979396] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.419 [2024-11-20 10:40:19.979407] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.419 [2024-11-20 10:40:19.979477] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.419 [2024-11-20 10:40:19.979483] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.419 [2024-11-20 10:40:19.979486] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979489] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=4096, cccid=4 00:22:39.419 [2024-11-20 10:40:19.979493] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de700) on tqpair(0x177c690): expected_datao=0, payload_size=4096 00:22:39.419 [2024-11-20 10:40:19.979497] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979507] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:19.979510] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:20.021344] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.419 [2024-11-20 10:40:20.021356] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.419 [2024-11-20 10:40:20.021360] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.419 [2024-11-20 10:40:20.021366] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.419 [2024-11-20 10:40:20.021375] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021382] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported log pages (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021391] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set supported features (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021397] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021402] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set doorbell buffer config (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021407] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to set host ID (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021412] nvme_ctrlr.c:3147:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] NVMe-oF transport - not sending Set Features - Host ID 00:22:39.419 [2024-11-20 10:40:20.021417] nvme_ctrlr.c:1561:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to transport ready (timeout 30000 ms) 00:22:39.419 [2024-11-20 10:40:20.021422] nvme_ctrlr.c:1567:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] setting state to ready (no timeout) 00:22:39.419 [2024-11-20 10:40:20.021435] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021439] nvme_tcp.c: 
918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021446] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.021452] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021456] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021459] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021464] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:39.420 [2024-11-20 10:40:20.021479] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.420 [2024-11-20 10:40:20.021485] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de880, cid 5, qid 0 00:22:39.420 [2024-11-20 10:40:20.021568] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.021574] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.021577] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021580] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.021586] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.021591] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.021594] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021598] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de880) on tqpair=0x177c690 00:22:39.420 [2024-11-20 
10:40:20.021606] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021609] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021615] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.021625] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de880, cid 5, qid 0 00:22:39.420 [2024-11-20 10:40:20.021705] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.021711] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.021714] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021717] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de880) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.021726] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021730] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021736] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.021745] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de880, cid 5, qid 0 00:22:39.420 [2024-11-20 10:40:20.021815] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.021821] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.021824] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021828] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: 
complete tcp_req(0x17de880) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.021836] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021840] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021845] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.021855] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de880, cid 5, qid 0 00:22:39.420 [2024-11-20 10:40:20.021930] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.021936] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.021939] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021943] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de880) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.021957] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021961] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021967] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.021973] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021976] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021982] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:22:39.420 [2024-11-20 10:40:20.021988] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.021991] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.021997] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.022003] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022007] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x177c690) 00:22:39.420 [2024-11-20 10:40:20.022013] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.420 [2024-11-20 10:40:20.022024] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de880, cid 5, qid 0 00:22:39.420 [2024-11-20 10:40:20.022028] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de700, cid 4, qid 0 00:22:39.420 [2024-11-20 10:40:20.022036] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17dea00, cid 6, qid 0 00:22:39.420 [2024-11-20 10:40:20.022041] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17deb80, cid 7, qid 0 00:22:39.420 [2024-11-20 10:40:20.022171] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.420 [2024-11-20 10:40:20.022177] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.420 [2024-11-20 10:40:20.022180] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022183] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=8192, cccid=5 00:22:39.420 [2024-11-20 10:40:20.022187] 
nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de880) on tqpair(0x177c690): expected_datao=0, payload_size=8192 00:22:39.420 [2024-11-20 10:40:20.022192] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022223] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022228] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022232] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.420 [2024-11-20 10:40:20.022237] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.420 [2024-11-20 10:40:20.022240] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022243] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=512, cccid=4 00:22:39.420 [2024-11-20 10:40:20.022247] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17de700) on tqpair(0x177c690): expected_datao=0, payload_size=512 00:22:39.420 [2024-11-20 10:40:20.022251] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022256] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022259] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022264] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.420 [2024-11-20 10:40:20.022269] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.420 [2024-11-20 10:40:20.022272] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022275] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=512, cccid=6 00:22:39.420 [2024-11-20 10:40:20.022279] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: 
tcp_req(0x17dea00) on tqpair(0x177c690): expected_datao=0, payload_size=512 00:22:39.420 [2024-11-20 10:40:20.022283] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022288] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022291] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022295] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:39.420 [2024-11-20 10:40:20.022300] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:39.420 [2024-11-20 10:40:20.022303] nvme_tcp.c:1619:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022306] nvme_tcp.c:1620:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x177c690): datao=0, datal=4096, cccid=7 00:22:39.420 [2024-11-20 10:40:20.022310] nvme_tcp.c:1631:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x17deb80) on tqpair(0x177c690): expected_datao=0, payload_size=4096 00:22:39.420 [2024-11-20 10:40:20.022313] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022319] nvme_tcp.c:1421:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022322] nvme_tcp.c:1255:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022329] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.022334] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.022338] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022341] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de880) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.022353] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.022358] 
nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.022362] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022365] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de700) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.022373] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.022378] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.022382] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022385] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17dea00) on tqpair=0x177c690 00:22:39.420 [2024-11-20 10:40:20.022391] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.420 [2024-11-20 10:40:20.022396] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.420 [2024-11-20 10:40:20.022400] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.420 [2024-11-20 10:40:20.022403] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17deb80) on tqpair=0x177c690 00:22:39.420 ===================================================== 00:22:39.421 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:39.421 ===================================================== 00:22:39.421 Controller Capabilities/Features 00:22:39.421 ================================ 00:22:39.421 Vendor ID: 8086 00:22:39.421 Subsystem Vendor ID: 8086 00:22:39.421 Serial Number: SPDK00000000000001 00:22:39.421 Model Number: SPDK bdev Controller 00:22:39.421 Firmware Version: 25.01 00:22:39.421 Recommended Arb Burst: 6 00:22:39.421 IEEE OUI Identifier: e4 d2 5c 00:22:39.421 Multi-path I/O 00:22:39.421 May have multiple subsystem ports: Yes 00:22:39.421 May have multiple controllers: Yes 00:22:39.421 Associated with SR-IOV VF: No 
00:22:39.421 Max Data Transfer Size: 131072 00:22:39.421 Max Number of Namespaces: 32 00:22:39.421 Max Number of I/O Queues: 127 00:22:39.421 NVMe Specification Version (VS): 1.3 00:22:39.421 NVMe Specification Version (Identify): 1.3 00:22:39.421 Maximum Queue Entries: 128 00:22:39.421 Contiguous Queues Required: Yes 00:22:39.421 Arbitration Mechanisms Supported 00:22:39.421 Weighted Round Robin: Not Supported 00:22:39.421 Vendor Specific: Not Supported 00:22:39.421 Reset Timeout: 15000 ms 00:22:39.421 Doorbell Stride: 4 bytes 00:22:39.421 NVM Subsystem Reset: Not Supported 00:22:39.421 Command Sets Supported 00:22:39.421 NVM Command Set: Supported 00:22:39.421 Boot Partition: Not Supported 00:22:39.421 Memory Page Size Minimum: 4096 bytes 00:22:39.421 Memory Page Size Maximum: 4096 bytes 00:22:39.421 Persistent Memory Region: Not Supported 00:22:39.421 Optional Asynchronous Events Supported 00:22:39.421 Namespace Attribute Notices: Supported 00:22:39.421 Firmware Activation Notices: Not Supported 00:22:39.421 ANA Change Notices: Not Supported 00:22:39.421 PLE Aggregate Log Change Notices: Not Supported 00:22:39.421 LBA Status Info Alert Notices: Not Supported 00:22:39.421 EGE Aggregate Log Change Notices: Not Supported 00:22:39.421 Normal NVM Subsystem Shutdown event: Not Supported 00:22:39.421 Zone Descriptor Change Notices: Not Supported 00:22:39.421 Discovery Log Change Notices: Not Supported 00:22:39.421 Controller Attributes 00:22:39.421 128-bit Host Identifier: Supported 00:22:39.421 Non-Operational Permissive Mode: Not Supported 00:22:39.421 NVM Sets: Not Supported 00:22:39.421 Read Recovery Levels: Not Supported 00:22:39.421 Endurance Groups: Not Supported 00:22:39.421 Predictable Latency Mode: Not Supported 00:22:39.421 Traffic Based Keep ALive: Not Supported 00:22:39.421 Namespace Granularity: Not Supported 00:22:39.421 SQ Associations: Not Supported 00:22:39.421 UUID List: Not Supported 00:22:39.421 Multi-Domain Subsystem: Not Supported 00:22:39.421 
Fixed Capacity Management: Not Supported 00:22:39.421 Variable Capacity Management: Not Supported 00:22:39.421 Delete Endurance Group: Not Supported 00:22:39.421 Delete NVM Set: Not Supported 00:22:39.421 Extended LBA Formats Supported: Not Supported 00:22:39.421 Flexible Data Placement Supported: Not Supported 00:22:39.421 00:22:39.421 Controller Memory Buffer Support 00:22:39.421 ================================ 00:22:39.421 Supported: No 00:22:39.421 00:22:39.421 Persistent Memory Region Support 00:22:39.421 ================================ 00:22:39.421 Supported: No 00:22:39.421 00:22:39.421 Admin Command Set Attributes 00:22:39.421 ============================ 00:22:39.421 Security Send/Receive: Not Supported 00:22:39.421 Format NVM: Not Supported 00:22:39.421 Firmware Activate/Download: Not Supported 00:22:39.421 Namespace Management: Not Supported 00:22:39.421 Device Self-Test: Not Supported 00:22:39.421 Directives: Not Supported 00:22:39.421 NVMe-MI: Not Supported 00:22:39.421 Virtualization Management: Not Supported 00:22:39.421 Doorbell Buffer Config: Not Supported 00:22:39.421 Get LBA Status Capability: Not Supported 00:22:39.421 Command & Feature Lockdown Capability: Not Supported 00:22:39.421 Abort Command Limit: 4 00:22:39.421 Async Event Request Limit: 4 00:22:39.421 Number of Firmware Slots: N/A 00:22:39.421 Firmware Slot 1 Read-Only: N/A 00:22:39.421 Firmware Activation Without Reset: N/A 00:22:39.421 Multiple Update Detection Support: N/A 00:22:39.421 Firmware Update Granularity: No Information Provided 00:22:39.421 Per-Namespace SMART Log: No 00:22:39.421 Asymmetric Namespace Access Log Page: Not Supported 00:22:39.421 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:39.421 Command Effects Log Page: Supported 00:22:39.421 Get Log Page Extended Data: Supported 00:22:39.421 Telemetry Log Pages: Not Supported 00:22:39.421 Persistent Event Log Pages: Not Supported 00:22:39.421 Supported Log Pages Log Page: May Support 00:22:39.421 Commands Supported & 
Effects Log Page: Not Supported 00:22:39.421 Feature Identifiers & Effects Log Page:May Support 00:22:39.421 NVMe-MI Commands & Effects Log Page: May Support 00:22:39.421 Data Area 4 for Telemetry Log: Not Supported 00:22:39.421 Error Log Page Entries Supported: 128 00:22:39.421 Keep Alive: Supported 00:22:39.421 Keep Alive Granularity: 10000 ms 00:22:39.421 00:22:39.421 NVM Command Set Attributes 00:22:39.421 ========================== 00:22:39.421 Submission Queue Entry Size 00:22:39.421 Max: 64 00:22:39.421 Min: 64 00:22:39.421 Completion Queue Entry Size 00:22:39.421 Max: 16 00:22:39.421 Min: 16 00:22:39.421 Number of Namespaces: 32 00:22:39.421 Compare Command: Supported 00:22:39.421 Write Uncorrectable Command: Not Supported 00:22:39.421 Dataset Management Command: Supported 00:22:39.421 Write Zeroes Command: Supported 00:22:39.421 Set Features Save Field: Not Supported 00:22:39.421 Reservations: Supported 00:22:39.421 Timestamp: Not Supported 00:22:39.421 Copy: Supported 00:22:39.421 Volatile Write Cache: Present 00:22:39.421 Atomic Write Unit (Normal): 1 00:22:39.421 Atomic Write Unit (PFail): 1 00:22:39.421 Atomic Compare & Write Unit: 1 00:22:39.421 Fused Compare & Write: Supported 00:22:39.421 Scatter-Gather List 00:22:39.421 SGL Command Set: Supported 00:22:39.421 SGL Keyed: Supported 00:22:39.421 SGL Bit Bucket Descriptor: Not Supported 00:22:39.421 SGL Metadata Pointer: Not Supported 00:22:39.421 Oversized SGL: Not Supported 00:22:39.421 SGL Metadata Address: Not Supported 00:22:39.421 SGL Offset: Supported 00:22:39.421 Transport SGL Data Block: Not Supported 00:22:39.421 Replay Protected Memory Block: Not Supported 00:22:39.421 00:22:39.421 Firmware Slot Information 00:22:39.421 ========================= 00:22:39.421 Active slot: 1 00:22:39.421 Slot 1 Firmware Revision: 25.01 00:22:39.421 00:22:39.421 00:22:39.421 Commands Supported and Effects 00:22:39.421 ============================== 00:22:39.421 Admin Commands 00:22:39.421 -------------- 
00:22:39.421 Get Log Page (02h): Supported 00:22:39.421 Identify (06h): Supported 00:22:39.421 Abort (08h): Supported 00:22:39.421 Set Features (09h): Supported 00:22:39.421 Get Features (0Ah): Supported 00:22:39.421 Asynchronous Event Request (0Ch): Supported 00:22:39.421 Keep Alive (18h): Supported 00:22:39.421 I/O Commands 00:22:39.421 ------------ 00:22:39.421 Flush (00h): Supported LBA-Change 00:22:39.421 Write (01h): Supported LBA-Change 00:22:39.421 Read (02h): Supported 00:22:39.421 Compare (05h): Supported 00:22:39.421 Write Zeroes (08h): Supported LBA-Change 00:22:39.421 Dataset Management (09h): Supported LBA-Change 00:22:39.421 Copy (19h): Supported LBA-Change 00:22:39.421 00:22:39.421 Error Log 00:22:39.421 ========= 00:22:39.421 00:22:39.421 Arbitration 00:22:39.421 =========== 00:22:39.421 Arbitration Burst: 1 00:22:39.421 00:22:39.421 Power Management 00:22:39.421 ================ 00:22:39.421 Number of Power States: 1 00:22:39.421 Current Power State: Power State #0 00:22:39.421 Power State #0: 00:22:39.421 Max Power: 0.00 W 00:22:39.421 Non-Operational State: Operational 00:22:39.421 Entry Latency: Not Reported 00:22:39.421 Exit Latency: Not Reported 00:22:39.421 Relative Read Throughput: 0 00:22:39.421 Relative Read Latency: 0 00:22:39.421 Relative Write Throughput: 0 00:22:39.421 Relative Write Latency: 0 00:22:39.421 Idle Power: Not Reported 00:22:39.421 Active Power: Not Reported 00:22:39.421 Non-Operational Permissive Mode: Not Supported 00:22:39.421 00:22:39.421 Health Information 00:22:39.421 ================== 00:22:39.421 Critical Warnings: 00:22:39.421 Available Spare Space: OK 00:22:39.421 Temperature: OK 00:22:39.421 Device Reliability: OK 00:22:39.421 Read Only: No 00:22:39.421 Volatile Memory Backup: OK 00:22:39.421 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:39.421 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:39.421 Available Spare: 0% 00:22:39.421 Available Spare Threshold: 0% 00:22:39.421 Life Percentage 
Used:[2024-11-20 10:40:20.022482] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022487] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.022493] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.022505] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17deb80, cid 7, qid 0 00:22:39.422 [2024-11-20 10:40:20.022587] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.022593] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.022596] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022599] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17deb80) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022628] nvme_ctrlr.c:4399:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] Prepare to destruct SSD 00:22:39.422 [2024-11-20 10:40:20.022637] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de100) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.422 [2024-11-20 10:40:20.022648] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de280) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.422 [2024-11-20 10:40:20.022657] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de400) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022661] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.422 [2024-11-20 10:40:20.022666] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:39.422 [2024-11-20 10:40:20.022677] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022681] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022684] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.022690] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.022702] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.022767] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.022773] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.022776] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022780] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022785] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022789] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022792] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.022798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.022811] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.022886] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.022892] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.022895] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022899] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.022903] nvme_ctrlr.c:1151:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] RTD3E = 0 us 00:22:39.422 [2024-11-20 10:40:20.022907] nvme_ctrlr.c:1154:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown timeout = 10000 ms 00:22:39.422 [2024-11-20 10:40:20.022915] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022919] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.022922] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.022928] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.022938] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023007] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.023012] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023015] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023019] 
nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.023028] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023032] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023035] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.023041] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.023051] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023120] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.023126] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023129] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023133] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.023140] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023144] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023147] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.023153] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.023164] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023239] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 
10:40:20.023245] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023248] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023251] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.023259] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023263] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023266] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.023271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.023281] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023345] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.023350] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023353] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023357] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.023365] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023369] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023372] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.023378] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 
10:40:20.023387] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023451] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.023457] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023460] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023463] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.422 [2024-11-20 10:40:20.023472] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023476] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.422 [2024-11-20 10:40:20.023479] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.422 [2024-11-20 10:40:20.023484] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.422 [2024-11-20 10:40:20.023494] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.422 [2024-11-20 10:40:20.023563] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.422 [2024-11-20 10:40:20.023568] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.422 [2024-11-20 10:40:20.023572] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023575] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.023583] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023587] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023590] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: 
capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.023596] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.023605] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 10:40:20.023663] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.423 [2024-11-20 10:40:20.023669] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.423 [2024-11-20 10:40:20.023672] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023676] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.023683] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023687] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023690] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.023696] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.023705] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 10:40:20.023766] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.423 [2024-11-20 10:40:20.023772] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.423 [2024-11-20 10:40:20.023774] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023778] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.023786] nvme_tcp.c: 
732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023790] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023793] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.023798] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.023808] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 10:40:20.023864] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.423 [2024-11-20 10:40:20.023869] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.423 [2024-11-20 10:40:20.023872] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023876] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.023884] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023887] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023890] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.023896] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.023906] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 10:40:20.023980] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.423 [2024-11-20 10:40:20.023987] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.423 [2024-11-20 10:40:20.023990] 
nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.023993] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.024001] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.024005] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.024008] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.024014] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.024023] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 10:40:20.024080] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:39.423 [2024-11-20 10:40:20.024086] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:39.423 [2024-11-20 10:40:20.024090] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.024093] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690 00:22:39.423 [2024-11-20 10:40:20.024101] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.024104] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:39.423 [2024-11-20 10:40:20.024107] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690) 00:22:39.423 [2024-11-20 10:40:20.024112] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:39.423 [2024-11-20 10:40:20.024122] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0 00:22:39.423 [2024-11-20 
10:40:20.024183] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:39.423 [2024-11-20 10:40:20.024188] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:39.423 [2024-11-20 10:40:20.024192] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:39.423 [2024-11-20 10:40:20.024196] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690
00:22:39.423 [2024-11-20 10:40:20.028211] nvme_tcp.c: 732:nvme_tcp_build_contig_request: *DEBUG*: enter
00:22:39.423 [2024-11-20 10:40:20.028217] nvme_tcp.c: 909:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter
00:22:39.423 [2024-11-20 10:40:20.028220] nvme_tcp.c: 918:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x177c690)
00:22:39.423 [2024-11-20 10:40:20.028226] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:22:39.423 [2024-11-20 10:40:20.028237] nvme_tcp.c: 883:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x17de580, cid 3, qid 0
00:22:39.423 [2024-11-20 10:40:20.028389] nvme_tcp.c:1130:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5
00:22:39.423 [2024-11-20 10:40:20.028395] nvme_tcp.c:1875:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5
00:22:39.423 [2024-11-20 10:40:20.028398] nvme_tcp.c:1548:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter
00:22:39.423 [2024-11-20 10:40:20.028401] nvme_tcp.c:1011:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x17de580) on tqpair=0x177c690
00:22:39.423 [2024-11-20 10:40:20.028408] nvme_ctrlr.c:1273:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1, 1] shutdown complete in 5 milliseconds
00:22:39.423 0%
00:22:39.423 Data Units Read: 0
00:22:39.423 Data Units Written: 0
00:22:39.423 Host Read Commands: 0
00:22:39.423 Host Write Commands: 0
00:22:39.423 Controller Busy Time: 0 minutes
00:22:39.423 Power Cycles: 0
00:22:39.423 Power On Hours: 0 hours
00:22:39.423 
Unsafe Shutdowns: 0
00:22:39.423 Unrecoverable Media Errors: 0
00:22:39.423 Lifetime Error Log Entries: 0
00:22:39.423 Warning Temperature Time: 0 minutes
00:22:39.423 Critical Temperature Time: 0 minutes
00:22:39.423 
00:22:39.423 Number of Queues
00:22:39.423 ================
00:22:39.423 Number of I/O Submission Queues: 127
00:22:39.423 Number of I/O Completion Queues: 127
00:22:39.423 
00:22:39.423 Active Namespaces
00:22:39.423 =================
00:22:39.423 Namespace ID:1
00:22:39.423 Error Recovery Timeout: Unlimited
00:22:39.423 Command Set Identifier: NVM (00h)
00:22:39.423 Deallocate: Supported
00:22:39.423 Deallocated/Unwritten Error: Not Supported
00:22:39.423 Deallocated Read Value: Unknown
00:22:39.423 Deallocate in Write Zeroes: Not Supported
00:22:39.423 Deallocated Guard Field: 0xFFFF
00:22:39.423 Flush: Supported
00:22:39.423 Reservation: Supported
00:22:39.423 Namespace Sharing Capabilities: Multiple Controllers
00:22:39.423 Size (in LBAs): 131072 (0GiB)
00:22:39.423 Capacity (in LBAs): 131072 (0GiB)
00:22:39.423 Utilization (in LBAs): 131072 (0GiB)
00:22:39.423 NGUID: ABCDEF0123456789ABCDEF0123456789
00:22:39.423 EUI64: ABCDEF0123456789
00:22:39.423 UUID: f78731cb-1c09-4753-a6c6-4058bdfdb15c
00:22:39.423 Thin Provisioning: Not Supported
00:22:39.423 Per-NS Atomic Units: Yes
00:22:39.423 Atomic Boundary Size (Normal): 0
00:22:39.423 Atomic Boundary Size (PFail): 0
00:22:39.423 Atomic Boundary Offset: 0
00:22:39.423 Maximum Single Source Range Length: 65535
00:22:39.423 Maximum Copy Length: 65535
00:22:39.423 Maximum Source Range Count: 1
00:22:39.423 NGUID/EUI64 Never Reused: No
00:22:39.423 Namespace Write Protected: No
00:22:39.423 Number of LBA Formats: 1
00:22:39.423 Current LBA Format: LBA Format #00
00:22:39.423 LBA Format #00: Data Size: 512 Metadata Size: 0
00:22:39.423 
00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync
00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- #
rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@563 -- # xtrace_disable 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # nvmfcleanup 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@99 -- # sync 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@102 -- # set +e 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@103 -- # for i in {1..20} 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:22:39.423 rmmod nvme_tcp 00:22:39.423 rmmod nvme_fabrics 00:22:39.423 rmmod nvme_keyring 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@106 -- # set -e 00:22:39.423 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@107 -- # return 0 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@336 -- # '[' -n 3304214 ']' 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@337 -- # killprocess 3304214 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # '[' -z 3304214 ']' 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # kill -0 3304214 00:22:39.424 10:40:20 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # uname 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:22:39.424 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3304214 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3304214' 00:22:39.683 killing process with pid 3304214 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@973 -- # kill 3304214 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@978 -- # wait 3304214 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # nvmf_fini 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@264 -- # local dev 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@267 -- # remove_target_ns 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:39.683 10:40:20 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@268 -- # delete_main_bridge 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@130 -- # return 0 
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # _dev=0 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@41 -- # dev_map=() 00:22:42.216 10:40:22 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/setup.sh@284 -- # iptr
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-save
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@542 -- # iptables-restore
00:22:42.216 
00:22:42.216 real 0m10.133s
00:22:42.216 user 0m8.245s
00:22:42.216 sys 0m4.949s
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x
00:22:42.216 ************************************
00:22:42.216 END TEST nvmf_identify
00:22:42.216 ************************************
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@21 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:22:42.216 ************************************
00:22:42.216 START TEST nvmf_perf
00:22:42.216 ************************************
00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp
00:22:42.216 * Looking for test storage...
00:22:42.216 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lcov --version 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # IFS=.-: 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@336 -- # read -ra ver1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # IFS=.-: 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@337 -- # read -ra ver2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@338 -- # local 'op=<' 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@340 -- # ver1_l=2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@341 -- # ver2_l=1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@344 -- # case "$op" in 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@345 -- # : 1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v = 0 )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # decimal 1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@365 -- # ver1[v]=1 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # decimal 2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@353 -- # local d=2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@355 -- # echo 2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@366 -- # ver2[v]=2 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@368 -- # return 0 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:22:42.216 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:22:42.216 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.216 --rc genhtml_branch_coverage=1 00:22:42.216 --rc genhtml_function_coverage=1 00:22:42.216 --rc genhtml_legend=1 00:22:42.216 --rc geninfo_all_blocks=1 00:22:42.217 --rc geninfo_unexecuted_blocks=1 00:22:42.217 00:22:42.217 ' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:22:42.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:22:42.217 --rc genhtml_branch_coverage=1 00:22:42.217 --rc genhtml_function_coverage=1 00:22:42.217 --rc genhtml_legend=1 00:22:42.217 --rc geninfo_all_blocks=1 00:22:42.217 --rc geninfo_unexecuted_blocks=1 00:22:42.217 00:22:42.217 ' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:22:42.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.217 --rc genhtml_branch_coverage=1 00:22:42.217 --rc genhtml_function_coverage=1 00:22:42.217 --rc genhtml_legend=1 00:22:42.217 --rc geninfo_all_blocks=1 00:22:42.217 --rc geninfo_unexecuted_blocks=1 00:22:42.217 00:22:42.217 ' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:22:42.217 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:22:42.217 --rc genhtml_branch_coverage=1 00:22:42.217 --rc genhtml_function_coverage=1 00:22:42.217 --rc genhtml_legend=1 00:22:42.217 --rc geninfo_all_blocks=1 00:22:42.217 --rc geninfo_unexecuted_blocks=1 00:22:42.217 00:22:42.217 ' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # 
NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@15 -- # shopt -s extglob 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export 
PATH 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@50 -- # : 0 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:22:42.217 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:22:42.217 10:40:22 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # prepare_net_devs 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # remove_target_ns 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # xtrace_disable 00:22:42.217 10:40:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@129 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # pci_devs=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@131 -- # local -a pci_devs 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # pci_drivers=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # net_devs=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@135 -- # local -ga net_devs 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # e810=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@136 -- # local -ga e810 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # x722=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@137 -- # local -ga x722 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # mlx=() 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@138 -- # local -ga mlx 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:48.789 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.789 10:40:28 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:48.789 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:48.790 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:48.790 Found net devices under 0000:86:00.0: cvl_0_0 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:48.790 Found net devices under 0000:86:00.1: cvl_0_1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@262 -- # is_hw=yes 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@255 -- # local 
total_initiator_target_pairs=1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@257 -- # create_target_ns 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@28 -- # local -g _dev 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max 
+ no )) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # ips=() 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772161 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:22:48.790 10.0.0.1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@11 -- # local val=167772162 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 
00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:22:48.790 10.0.0.2 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:22:48.790 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@38 -- # ping_ips 1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:22:48.791 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:48.791 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:22:48.791 00:22:48.791 --- 10.0.0.1 ping statistics --- 00:22:48.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.791 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 
00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:22:48.791 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:48.791 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.135 ms 00:22:48.791 00:22:48.791 --- 10.0.0.2 ping statistics --- 00:22:48.791 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:48.791 rtt min/avg/max/mdev = 0.135/0.135/0.135/0.000 ms 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # return 0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # 
get_ip_address initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # 
get_net_dev initiator1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target0 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # eval 'ip netns exec 
nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:22:48.791 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@107 -- # local dev=target1 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@109 -- # return 1 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@168 -- # dev= 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@169 -- # return 0 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:22:48.792 
10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@726 -- # xtrace_disable 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@328 -- # nvmfpid=3308004 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # waitforlisten 3308004 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@835 -- # '[' -z 3308004 ']' 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@840 -- # local max_retries=100 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:22:48.792 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@844 -- # xtrace_disable 00:22:48.792 10:40:28 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.792 [2024-11-20 10:40:28.813055] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:22:48.792 [2024-11-20 10:40:28.813096] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:48.792 [2024-11-20 10:40:28.891885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:48.792 [2024-11-20 10:40:28.933527] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:48.792 [2024-11-20 10:40:28.933565] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:48.792 [2024-11-20 10:40:28.933572] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:48.792 [2024-11-20 10:40:28.933578] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:48.792 [2024-11-20 10:40:28.933582] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:22:48.792 [2024-11-20 10:40:28.935010] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:22:48.792 [2024-11-20 10:40:28.935120] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:22:48.792 [2024-11-20 10:40:28.935238] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:22:48.792 [2024-11-20 10:40:28.935239] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@868 -- # return 0 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@732 -- # xtrace_disable 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:48.792 10:40:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:51.435 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:51.435 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:51.694 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:51.694 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:51.952 10:40:32 
nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:51.952 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 -- # '[' -n 0000:5e:00.0 ']' 00:22:51.952 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:51.952 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:51.952 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:52.210 [2024-11-20 10:40:32.703170] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:52.210 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:52.210 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:52.210 10:40:32 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:52.468 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:52.468 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:52.726 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:52.985 [2024-11-20 10:40:33.491289] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:52.985 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 
4420 00:22:53.244 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:53.244 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:53.244 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:53.244 10:40:33 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:54.619 Initializing NVMe Controllers 00:22:54.619 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:54.619 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:54.619 Initialization complete. Launching workers. 00:22:54.619 ======================================================== 00:22:54.619 Latency(us) 00:22:54.619 Device Information : IOPS MiB/s Average min max 00:22:54.619 PCIE (0000:5e:00.0) NSID 1 from core 0: 97330.56 380.20 328.31 20.94 4650.72 00:22:54.619 ======================================================== 00:22:54.619 Total : 97330.56 380.20 328.31 20.94 4650.72 00:22:54.619 00:22:54.619 10:40:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:55.555 Initializing NVMe Controllers 00:22:55.555 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:55.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:55.555 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:55.555 Initialization complete. Launching workers. 
00:22:55.555 ======================================================== 00:22:55.555 Latency(us) 00:22:55.555 Device Information : IOPS MiB/s Average min max 00:22:55.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 198.00 0.77 5206.38 109.55 45685.76 00:22:55.555 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 56.00 0.22 17923.90 7193.06 47884.87 00:22:55.555 ======================================================== 00:22:55.555 Total : 254.00 0.99 8010.24 109.55 47884.87 00:22:55.555 00:22:55.555 10:40:36 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:56.931 Initializing NVMe Controllers 00:22:56.931 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:56.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:56.931 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:56.931 Initialization complete. Launching workers. 
00:22:56.931 ======================================================== 00:22:56.931 Latency(us) 00:22:56.931 Device Information : IOPS MiB/s Average min max 00:22:56.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 11028.99 43.08 2910.21 531.57 6355.40 00:22:56.931 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3812.00 14.89 8430.46 5334.51 17125.42 00:22:56.931 ======================================================== 00:22:56.931 Total : 14840.99 57.97 4328.12 531.57 17125.42 00:22:56.931 00:22:56.931 10:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:56.931 10:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:56.931 10:40:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:59.466 Initializing NVMe Controllers 00:22:59.466 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.466 Controller IO queue size 128, less than required. 00:22:59.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.466 Controller IO queue size 128, less than required. 00:22:59.466 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:59.466 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:59.466 Initialization complete. Launching workers. 
00:22:59.466 ======================================================== 00:22:59.466 Latency(us) 00:22:59.466 Device Information : IOPS MiB/s Average min max 00:22:59.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1711.95 427.99 75673.94 53596.28 138635.61 00:22:59.466 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 613.48 153.37 215947.06 79887.95 327661.97 00:22:59.466 ======================================================== 00:22:59.466 Total : 2325.44 581.36 112679.99 53596.28 327661.97 00:22:59.466 00:22:59.466 10:40:39 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:59.724 No valid NVMe controllers or AIO or URING devices found 00:22:59.724 Initializing NVMe Controllers 00:22:59.724 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:59.724 Controller IO queue size 128, less than required. 00:22:59.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.724 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:59.724 Controller IO queue size 128, less than required. 00:22:59.724 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:59.724 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:59.724 WARNING: Some requested NVMe devices were skipped 00:22:59.724 10:40:40 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:23:02.258 Initializing NVMe Controllers 00:23:02.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:23:02.258 Controller IO queue size 128, less than required. 00:23:02.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.258 Controller IO queue size 128, less than required. 00:23:02.258 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:23:02.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:23:02.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:23:02.258 Initialization complete. Launching workers. 
00:23:02.258 00:23:02.258 ==================== 00:23:02.258 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:23:02.258 TCP transport: 00:23:02.258 polls: 11016 00:23:02.258 idle_polls: 7749 00:23:02.258 sock_completions: 3267 00:23:02.258 nvme_completions: 6203 00:23:02.258 submitted_requests: 9316 00:23:02.258 queued_requests: 1 00:23:02.258 00:23:02.258 ==================== 00:23:02.258 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:23:02.258 TCP transport: 00:23:02.258 polls: 11247 00:23:02.258 idle_polls: 7921 00:23:02.258 sock_completions: 3326 00:23:02.258 nvme_completions: 6437 00:23:02.258 submitted_requests: 9760 00:23:02.258 queued_requests: 1 00:23:02.258 ======================================================== 00:23:02.258 Latency(us) 00:23:02.258 Device Information : IOPS MiB/s Average min max 00:23:02.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 1549.55 387.39 84801.13 58961.98 126500.84 00:23:02.258 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 1608.02 402.00 79420.75 47072.16 112025.68 00:23:02.258 ======================================================== 00:23:02.258 Total : 3157.57 789.39 82061.13 47072.16 126500.84 00:23:02.258 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf 
-- nvmf/common.sh@99 -- # sync 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@102 -- # set +e 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:02.258 10:40:42 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:02.258 rmmod nvme_tcp 00:23:02.517 rmmod nvme_fabrics 00:23:02.517 rmmod nvme_keyring 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@106 -- # set -e 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@107 -- # return 0 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@336 -- # '[' -n 3308004 ']' 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@337 -- # killprocess 3308004 00:23:02.517 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # '[' -z 3308004 ']' 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # kill -0 3308004 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # uname 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3308004 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3308004' 00:23:02.518 killing process with pid 3308004 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- 
common/autotest_common.sh@973 -- # kill 3308004 00:23:02.518 10:40:43 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@978 -- # wait 3308004 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # nvmf_fini 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@264 -- # local dev 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:05.050 10:40:45 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@130 -- # return 0 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:06.956 10:40:47 
nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # _dev=0 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@41 -- # dev_map=() 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/setup.sh@284 -- # iptr 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-save 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@542 -- # iptables-restore 00:23:06.956 00:23:06.956 real 0m24.733s 00:23:06.956 user 1m4.504s 00:23:06.956 sys 0m8.338s 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:23:06.956 ************************************ 00:23:06.956 END TEST nvmf_perf 00:23:06.956 ************************************ 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_fio_host 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:06.956 ************************************ 00:23:06.956 START TEST nvmf_fio_host 00:23:06.956 ************************************ 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:23:06.956 * Looking for test storage... 00:23:06.956 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lcov --version 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # IFS=.-: 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@336 -- # read -ra ver1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # IFS=.-: 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@337 -- # read -ra ver2 00:23:06.956 10:40:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@338 -- # local 'op=<' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@340 -- # ver1_l=2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@341 -- # ver2_l=1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@344 -- # case "$op" in 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@345 -- # : 1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # decimal 1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@365 -- # ver1[v]=1 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # decimal 2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@353 -- # local d=2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@355 -- # echo 2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@366 -- # ver2[v]=2 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:06.956 10:40:47 
nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@368 -- # return 0 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.956 --rc genhtml_branch_coverage=1 00:23:06.956 --rc genhtml_function_coverage=1 00:23:06.956 --rc genhtml_legend=1 00:23:06.956 --rc geninfo_all_blocks=1 00:23:06.956 --rc geninfo_unexecuted_blocks=1 00:23:06.956 00:23:06.956 ' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.956 --rc genhtml_branch_coverage=1 00:23:06.956 --rc genhtml_function_coverage=1 00:23:06.956 --rc genhtml_legend=1 00:23:06.956 --rc geninfo_all_blocks=1 00:23:06.956 --rc geninfo_unexecuted_blocks=1 00:23:06.956 00:23:06.956 ' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.956 --rc genhtml_branch_coverage=1 00:23:06.956 --rc genhtml_function_coverage=1 00:23:06.956 --rc genhtml_legend=1 00:23:06.956 --rc geninfo_all_blocks=1 00:23:06.956 --rc geninfo_unexecuted_blocks=1 00:23:06.956 00:23:06.956 ' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:06.956 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:06.956 --rc genhtml_branch_coverage=1 00:23:06.956 --rc genhtml_function_coverage=1 00:23:06.956 --rc genhtml_legend=1 00:23:06.956 --rc geninfo_all_blocks=1 00:23:06.956 --rc geninfo_unexecuted_blocks=1 00:23:06.956 00:23:06.956 ' 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.956 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@15 -- # shopt -s extglob 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 
00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@50 -- # : 0 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:06.957 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # remove_target_ns 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd 
_remove_target_ns 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # xtrace_disable 00:23:06.957 10:40:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # pci_devs=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # net_devs=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # e810=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@136 -- # local -ga e810 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # x722=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@137 -- # local -ga x722 00:23:13.525 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # mlx=() 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@138 -- # local -ga mlx 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:13.525 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:13.525 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:13.525 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:13.525 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@208 -- # (( 0 > 0 
)) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:13.526 Found net devices under 0000:86:00.0: cvl_0_0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:13.526 Found net devices under 0000:86:00.1: cvl_0_1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@262 -- # is_hw=yes 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@257 -- # create_target_ns 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.526 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@28 -- # local -g _dev 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # ips=() 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@51 -- # 
_ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772161 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev 
cvl_0_0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:13.526 10.0.0.1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@11 -- # local val=167772162 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:13.526 10.0.0.2 
00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # 
dev_map["$key_initiator"]=cvl_0_0 00:23:13.526 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:13.527 
10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:13.527 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:13.527 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.513 ms 00:23:13.527 00:23:13.527 --- 10.0.0.1 ping statistics --- 00:23:13.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.527 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:13.527 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:13.527 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:13.527 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.217 ms 00:23:13.527 00:23:13.527 --- 10.0.0.2 ping statistics --- 00:23:13.527 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:13.527 rtt min/avg/max/mdev = 0.217/0.217/0.217/0.000 ms 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # return 0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:13.527 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:13.527 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target0 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:13.527 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:13.527 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@107 -- # local dev=target1 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:13.528 10:40:53 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@109 -- # return 1 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@168 -- # dev= 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@169 -- # return 0 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=3314143 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
host/fio.sh@28 -- # waitforlisten 3314143 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@835 -- # '[' -z 3314143 ']' 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:13.528 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:13.528 10:40:53 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:13.528 [2024-11-20 10:40:53.701352] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:23:13.528 [2024-11-20 10:40:53.701406] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:13.528 [2024-11-20 10:40:53.784249] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:23:13.528 [2024-11-20 10:40:53.827043] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:13.528 [2024-11-20 10:40:53.827081] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:13.528 [2024-11-20 10:40:53.827089] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:13.528 [2024-11-20 10:40:53.827096] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 
00:23:13.528 [2024-11-20 10:40:53.827102] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:13.528 [2024-11-20 10:40:53.828801] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:13.528 [2024-11-20 10:40:53.828907] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:13.528 [2024-11-20 10:40:53.829015] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:13.528 [2024-11-20 10:40:53.829016] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@868 -- # return 0 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:14.092 [2024-11-20 10:40:54.707286] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:14.092 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:14.349 Malloc1 00:23:14.349 10:40:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:14.607 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 
00:23:14.865 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:14.865 [2024-11-20 10:40:55.577244] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:15.123 10:40:55 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:15.123 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:15.380 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:15.380 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:15.380 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:15.380 10:40:55 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 
00:23:15.638 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:15.638 fio-3.35 00:23:15.638 Starting 1 thread 00:23:18.168 00:23:18.168 test: (groupid=0, jobs=1): err= 0: pid=3314741: Wed Nov 20 10:40:58 2024 00:23:18.168 read: IOPS=11.9k, BW=46.4MiB/s (48.6MB/s)(93.0MiB/2005msec) 00:23:18.168 slat (nsec): min=1526, max=242209, avg=1746.62, stdev=2197.09 00:23:18.168 clat (usec): min=3137, max=10342, avg=5957.05, stdev=436.10 00:23:18.168 lat (usec): min=3168, max=10344, avg=5958.80, stdev=436.00 00:23:18.168 clat percentiles (usec): 00:23:18.168 | 1.00th=[ 4883], 5.00th=[ 5276], 10.00th=[ 5407], 20.00th=[ 5604], 00:23:18.168 | 30.00th=[ 5735], 40.00th=[ 5866], 50.00th=[ 5997], 60.00th=[ 6063], 00:23:18.168 | 70.00th=[ 6194], 80.00th=[ 6325], 90.00th=[ 6456], 95.00th=[ 6652], 00:23:18.168 | 99.00th=[ 6915], 99.50th=[ 7046], 99.90th=[ 8586], 99.95th=[ 9110], 00:23:18.168 | 99.99th=[10028] 00:23:18.168 bw ( KiB/s): min=46456, max=48032, per=99.98%, avg=47474.00, stdev=728.67, samples=4 00:23:18.168 iops : min=11614, max=12008, avg=11868.50, stdev=182.17, samples=4 00:23:18.168 write: IOPS=11.8k, BW=46.2MiB/s (48.4MB/s)(92.6MiB/2005msec); 0 zone resets 00:23:18.168 slat (nsec): min=1563, max=226717, avg=1809.11, stdev=1653.87 00:23:18.168 clat (usec): min=2442, max=9185, avg=4810.04, stdev=368.04 00:23:18.168 lat (usec): min=2458, max=9187, avg=4811.85, stdev=368.04 00:23:18.168 clat percentiles (usec): 00:23:18.168 | 1.00th=[ 3949], 5.00th=[ 4228], 10.00th=[ 4359], 20.00th=[ 4555], 00:23:18.168 | 30.00th=[ 4621], 40.00th=[ 4752], 50.00th=[ 4817], 60.00th=[ 4883], 00:23:18.168 | 70.00th=[ 5014], 80.00th=[ 5080], 90.00th=[ 5211], 95.00th=[ 5342], 00:23:18.168 | 99.00th=[ 5604], 99.50th=[ 5735], 99.90th=[ 7701], 99.95th=[ 8586], 00:23:18.168 | 99.99th=[ 9110] 00:23:18.168 bw ( KiB/s): min=46856, max=47800, per=99.98%, avg=47266.00, stdev=412.16, samples=4 00:23:18.168 iops : min=11712, max=11952, avg=11816.50, 
stdev=104.57, samples=4 00:23:18.168 lat (msec) : 4=0.69%, 10=99.31%, 20=0.01% 00:23:18.168 cpu : usr=72.21%, sys=26.70%, ctx=114, majf=0, minf=3 00:23:18.168 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:18.168 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:18.168 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:18.168 issued rwts: total=23801,23696,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:18.168 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:18.168 00:23:18.168 Run status group 0 (all jobs): 00:23:18.168 READ: bw=46.4MiB/s (48.6MB/s), 46.4MiB/s-46.4MiB/s (48.6MB/s-48.6MB/s), io=93.0MiB (97.5MB), run=2005-2005msec 00:23:18.168 WRITE: bw=46.2MiB/s (48.4MB/s), 46.2MiB/s-46.2MiB/s (48.4MB/s-48.4MB/s), io=92.6MiB (97.1MB), run=2005-2005msec 00:23:18.168 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1364 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local sanitizers 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.169 10:40:58 
nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # shift 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1347 -- # local asan_lib= 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libasan 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1349 -- # asan_lib= 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:18.169 10:40:58 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:18.169 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:18.169 fio-3.35 00:23:18.169 Starting 1 thread 00:23:20.702 00:23:20.702 test: (groupid=0, jobs=1): err= 0: pid=3315314: Wed Nov 20 10:41:01 2024 00:23:20.702 read: IOPS=11.0k, BW=171MiB/s (179MB/s)(343MiB/2005msec) 00:23:20.702 slat (nsec): min=2461, max=86841, avg=2854.07, stdev=1331.50 00:23:20.702 clat (usec): min=1741, max=12886, avg=6701.07, stdev=1577.79 00:23:20.702 lat (usec): min=1744, max=12892, avg=6703.93, stdev=1577.90 00:23:20.702 clat percentiles (usec): 00:23:20.702 | 1.00th=[ 3621], 5.00th=[ 4228], 10.00th=[ 4621], 20.00th=[ 5276], 00:23:20.702 | 30.00th=[ 5800], 40.00th=[ 6259], 50.00th=[ 6718], 60.00th=[ 7111], 00:23:20.702 | 70.00th=[ 7504], 80.00th=[ 7963], 90.00th=[ 8717], 95.00th=[ 9503], 00:23:20.702 | 99.00th=[10683], 99.50th=[11076], 99.90th=[11863], 99.95th=[11994], 00:23:20.702 | 99.99th=[12387] 00:23:20.702 bw ( KiB/s): min=85536, max=93824, per=50.96%, avg=89328.00, stdev=4223.72, samples=4 00:23:20.702 iops : min= 5346, max= 5864, avg=5583.00, stdev=263.98, samples=4 00:23:20.702 write: IOPS=6485, BW=101MiB/s (106MB/s)(183MiB/1801msec); 0 zone resets 00:23:20.702 slat (usec): min=28, max=388, avg=32.09, stdev= 7.33 00:23:20.702 clat (usec): min=4045, max=15903, avg=8613.82, stdev=1495.15 00:23:20.702 lat (usec): min=4075, max=15933, avg=8645.91, stdev=1496.36 00:23:20.702 clat percentiles (usec): 00:23:20.702 | 1.00th=[ 5932], 5.00th=[ 6521], 10.00th=[ 6849], 20.00th=[ 7308], 00:23:20.702 | 30.00th=[ 7701], 40.00th=[ 8094], 50.00th=[ 8455], 60.00th=[ 8848], 00:23:20.702 | 70.00th=[ 9241], 80.00th=[ 9765], 90.00th=[10552], 95.00th=[11338], 00:23:20.702 | 99.00th=[12780], 99.50th=[13435], 99.90th=[15008], 99.95th=[15664], 00:23:20.702 | 99.99th=[15926] 
00:23:20.702 bw ( KiB/s): min=89088, max=97312, per=89.41%, avg=92784.00, stdev=4179.85, samples=4 00:23:20.702 iops : min= 5568, max= 6082, avg=5799.00, stdev=261.24, samples=4 00:23:20.702 lat (msec) : 2=0.01%, 4=1.93%, 10=90.49%, 20=7.56% 00:23:20.702 cpu : usr=86.58%, sys=12.72%, ctx=37, majf=0, minf=3 00:23:20.702 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.3%, 32=0.7%, >=64=98.7% 00:23:20.702 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:20.702 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:20.702 issued rwts: total=21965,11681,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:20.702 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:20.702 00:23:20.702 Run status group 0 (all jobs): 00:23:20.702 READ: bw=171MiB/s (179MB/s), 171MiB/s-171MiB/s (179MB/s-179MB/s), io=343MiB (360MB), run=2005-2005msec 00:23:20.702 WRITE: bw=101MiB/s (106MB/s), 101MiB/s-101MiB/s (106MB/s-106MB/s), io=183MiB (191MB), run=1801-1801msec 00:23:20.702 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@99 -- # sync 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@102 -- # set +e 00:23:20.962 10:41:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:20.962 rmmod nvme_tcp 00:23:20.962 rmmod nvme_fabrics 00:23:20.962 rmmod nvme_keyring 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@106 -- # set -e 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@107 -- # return 0 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@336 -- # '[' -n 3314143 ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@337 -- # killprocess 3314143 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # '[' -z 3314143 ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # kill -0 3314143 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # uname 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3314143 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3314143' 00:23:20.962 killing process with pid 3314143 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@973 -- # kill 3314143 00:23:20.962 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@978 -- # wait 3314143 00:23:21.221 10:41:01 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # nvmf_fini 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@264 -- # local dev 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:21.221 10:41:01 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@130 -- # return 0 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:23:23.758 
10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:23:23.758 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # _dev=0 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@41 -- # dev_map=() 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/setup.sh@284 -- # iptr 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-save 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@542 -- # iptables-restore 00:23:23.759 00:23:23.759 real 0m16.588s 00:23:23.759 user 0m49.066s 00:23:23.759 sys 0m6.716s 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.759 ************************************ 00:23:23.759 END TEST nvmf_fio_host 00:23:23.759 ************************************ 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh 
--transport=tcp 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:23.759 ************************************ 00:23:23.759 START TEST nvmf_failover 00:23:23.759 ************************************ 00:23:23.759 10:41:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:23.759 * Looking for test storage... 00:23:23.759 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lcov --version 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@333 -- # local ver1 ver1_l 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@334 -- # local ver2 ver2_l 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # IFS=.-: 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@336 -- # read -ra ver1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # IFS=.-: 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@337 -- # read -ra ver2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@338 -- # local 'op=<' 
00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@340 -- # ver1_l=2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@341 -- # ver2_l=1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@344 -- # case "$op" in 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@345 -- # : 1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v = 0 )) 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # decimal 1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@365 -- # ver1[v]=1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # decimal 2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@353 -- # local d=2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@355 -- # echo 2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@366 -- # ver2[v]=2 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@368 -- # return 0 
00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:23:23.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.759 --rc genhtml_branch_coverage=1 00:23:23.759 --rc genhtml_function_coverage=1 00:23:23.759 --rc genhtml_legend=1 00:23:23.759 --rc geninfo_all_blocks=1 00:23:23.759 --rc geninfo_unexecuted_blocks=1 00:23:23.759 00:23:23.759 ' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:23:23.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.759 --rc genhtml_branch_coverage=1 00:23:23.759 --rc genhtml_function_coverage=1 00:23:23.759 --rc genhtml_legend=1 00:23:23.759 --rc geninfo_all_blocks=1 00:23:23.759 --rc geninfo_unexecuted_blocks=1 00:23:23.759 00:23:23.759 ' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:23:23.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.759 --rc genhtml_branch_coverage=1 00:23:23.759 --rc genhtml_function_coverage=1 00:23:23.759 --rc genhtml_legend=1 00:23:23.759 --rc geninfo_all_blocks=1 00:23:23.759 --rc geninfo_unexecuted_blocks=1 00:23:23.759 00:23:23.759 ' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:23:23.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:23:23.759 --rc genhtml_branch_coverage=1 00:23:23.759 --rc genhtml_function_coverage=1 00:23:23.759 --rc genhtml_legend=1 00:23:23.759 --rc geninfo_all_blocks=1 00:23:23.759 --rc geninfo_unexecuted_blocks=1 00:23:23.759 00:23:23.759 ' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 
00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@15 -- # shopt -s extglob 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:23.759 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@50 -- # : 0 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover 
-- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:23:23.760 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@54 -- # have_pci_nics=0 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # prepare_net_devs 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # local -g is_hw=no 
00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # remove_target_ns 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # xtrace_disable 00:23:23.760 10:41:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # pci_devs=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@131 -- # local -a pci_devs 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # pci_net_devs=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # pci_drivers=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@133 -- # local -A pci_drivers 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # net_devs=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@135 -- # local -ga net_devs 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # e810=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@136 -- # local -ga e810 00:23:30.329 10:41:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # x722=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@137 -- # local -ga x722 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # mlx=() 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@138 -- # local -ga mlx 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:23:30.329 10:41:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:30.329 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:30.329 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@193 -- # [[ 0x159b == 
\0\x\1\0\1\9 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:30.329 Found net devices under 0000:86:00.0: cvl_0_0 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:30.329 10:41:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # [[ up == up ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:30.329 Found net devices under 0000:86:00.1: cvl_0_1 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@262 -- # is_hw=yes 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:23:30.329 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@257 -- # create_target_ns 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@27 -- # local -gA dev_map 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@28 -- # local -g _dev 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # ips=() 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:23:30.330 10:41:09 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772161 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 
-- # ip=10.0.0.1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:23:30.330 10.0.0.1 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@11 -- # local val=167772162 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:23:30.330 10:41:09 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:23:30.330 
10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:23:30.330 10.0.0.2 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@541 -- # 
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:23:30.330 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@38 -- # ping_ips 1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 
00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:23:30.331 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:23:30.331 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.522 ms 00:23:30.331 00:23:30.331 --- 10.0.0.1 ping statistics --- 00:23:30.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.331 rtt min/avg/max/mdev = 0.522/0.522/0.522/0.000 ms 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:30.331 10:41:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:23:30.331 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:30.331 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:23:30.331 00:23:30.331 --- 10.0.0.2 ping statistics --- 00:23:30.331 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:30.331 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair++ )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # return 0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:23:30.331 10:41:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:23:30.331 10:41:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:23:30.331 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=initiator1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target0 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target0 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:23:30.332 10:41:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # get_net_dev target1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@107 -- # local dev=target1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:23:30.332 10:41:10 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@109 -- # return 1 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@168 -- # dev= 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@169 -- # return 0 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@726 -- # xtrace_disable 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@328 -- # nvmfpid=3319310 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # waitforlisten 3319310 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 
3319310 ']' 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:30.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.332 [2024-11-20 10:41:10.331186] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:23:30.332 [2024-11-20 10:41:10.331244] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:30.332 [2024-11-20 10:41:10.410148] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:30.332 [2024-11-20 10:41:10.451807] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:30.332 [2024-11-20 10:41:10.451839] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:30.332 [2024-11-20 10:41:10.451846] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:30.332 [2024-11-20 10:41:10.451852] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:30.332 [2024-11-20 10:41:10.451857] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:23:30.332 [2024-11-20 10:41:10.453186] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:23:30.332 [2024-11-20 10:41:10.453293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:23:30.332 [2024-11-20 10:41:10.453293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@732 -- # xtrace_disable 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:30.332 [2024-11-20 10:41:10.758046] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:30.332 10:41:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:30.332 Malloc0 00:23:30.332 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:30.592 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:30.851 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- 
host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:31.110 [2024-11-20 10:41:11.584661] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:31.110 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:31.110 [2024-11-20 10:41:11.769143] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:31.110 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:31.369 [2024-11-20 10:41:11.953707] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=3319572 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 3319572 /var/tmp/bdevperf.sock 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3319572 ']' 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:31.369 10:41:11 
nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:31.369 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable 00:23:31.369 10:41:11 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:31.628 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:23:31.628 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0 00:23:31.628 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:31.887 NVMe0n1 00:23:31.887 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:32.146 00:23:32.146 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=3319797 00:23:32.146 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:32.146 10:41:12 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:33.521 10:41:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:33.521 [2024-11-20 10:41:14.050072] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050121] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050129] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050136] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050142] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050148] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050155] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050161] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050166] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050172] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050177] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050183] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with 
the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050190] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 [2024-11-20 10:41:14.050195] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x113f2d0 is same with the state(6) to be set 00:23:33.521 10:41:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:36.809 10:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover 00:23:36.809 00:23:37.069 10:41:17 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:37.069 [2024-11-20 10:41:17.709540] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709578] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709585] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709591] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709598] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709604] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 
10:41:17.709609] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709616] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709621] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709627] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709633] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709638] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709644] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709650] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709656] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709661] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709667] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 [2024-11-20 10:41:17.709672] tcp.c:1773:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1140060 is same with the state(6) to be set 00:23:37.069 10:41:17 nvmf_tcp.nvmf_host.nvmf_failover 
-- host/failover.sh@50 -- # sleep 3 00:23:40.356 10:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:40.356 [2024-11-20 10:41:20.926589] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:40.356 10:41:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:41.293 10:41:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:41.553 10:41:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 3319797 00:23:48.127 { 00:23:48.127 "results": [ 00:23:48.127 { 00:23:48.127 "job": "NVMe0n1", 00:23:48.127 "core_mask": "0x1", 00:23:48.127 "workload": "verify", 00:23:48.127 "status": "finished", 00:23:48.127 "verify_range": { 00:23:48.127 "start": 0, 00:23:48.127 "length": 16384 00:23:48.127 }, 00:23:48.127 "queue_depth": 128, 00:23:48.127 "io_size": 4096, 00:23:48.127 "runtime": 15.008269, 00:23:48.127 "iops": 11097.015918358073, 00:23:48.127 "mibps": 43.34771843108622, 00:23:48.127 "io_failed": 18085, 00:23:48.127 "io_timeout": 0, 00:23:48.127 "avg_latency_us": 10383.691824561187, 00:23:48.127 "min_latency_us": 440.807619047619, 00:23:48.127 "max_latency_us": 21595.67238095238 00:23:48.127 } 00:23:48.127 ], 00:23:48.127 "core_count": 1 00:23:48.127 } 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3319572 ']' 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # 
uname 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3319572' 00:23:48.127 killing process with pid 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3319572 00:23:48.127 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:48.127 [2024-11-20 10:41:12.030348] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:23:48.127 [2024-11-20 10:41:12.030400] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3319572 ] 00:23:48.127 [2024-11-20 10:41:12.102586] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:48.127 [2024-11-20 10:41:12.144097] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:23:48.127 Running I/O for 15 seconds... 
00:23:48.127 11234.00 IOPS, 43.88 MiB/s [2024-11-20T09:41:28.858Z] [2024-11-20 10:41:14.050653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:100216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.127 [2024-11-20 10:41:14.050686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.127 [2024-11-20 10:41:14.050701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.127 [2024-11-20 10:41:14.050709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.127 [... identical READ command / ABORTED - SQ DELETION completion pairs repeat for lba:99328 through lba:100112 (len:8 each, timestamps 10:41:14.050718 through 10:41:14.052159) ...] 00:23:48.130 [2024-11-20 10:41:14.052167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:100120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:100128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:100136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:100144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:100152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:100160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:100168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:23:48.130 [2024-11-20 10:41:14.052265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:100176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:100184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:100192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:100200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:100208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.130 [2024-11-20 10:41:14.052338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052346] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:100224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:100232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:100240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:100248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:100256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:100264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:100272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:100280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:100288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.130 [2024-11-20 10:41:14.052474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:100296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.130 [2024-11-20 10:41:14.052481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:100304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:14.052494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:100312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 
[2024-11-20 10:41:14.052510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:100320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:14.052523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:100328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:14.052538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052557] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.131 [2024-11-20 10:41:14.052563] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.131 [2024-11-20 10:41:14.052570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:100336 len:8 PRP1 0x0 PRP2 0x0 00:23:48.131 [2024-11-20 10:41:14.052578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052621] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:48.131 [2024-11-20 10:41:14.052643] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.131 [2024-11-20 10:41:14.052650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 
10:41:14.052658] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.131 [2024-11-20 10:41:14.052664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052671] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.131 [2024-11-20 10:41:14.052678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052684] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:48.131 [2024-11-20 10:41:14.052691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:14.052697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] in failed state. 00:23:48.131 [2024-11-20 10:41:14.055480] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:23:48.131 [2024-11-20 10:41:14.055507] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2528340 (9): Bad file descriptor 00:23:48.131 [2024-11-20 10:41:14.200351] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Resetting controller successful. 
00:23:48.131 10498.00 IOPS, 41.01 MiB/s [2024-11-20T09:41:28.862Z] 10839.67 IOPS, 42.34 MiB/s [2024-11-20T09:41:28.862Z] 11011.50 IOPS, 43.01 MiB/s [2024-11-20T09:41:28.862Z] [2024-11-20 10:41:17.710220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:84768 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:84776 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:84784 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:84792 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:84800 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:84808 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:84816 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:84824 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:84832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 [2024-11-20 10:41:17.710383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:84848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:84856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:84864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:84872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:84880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:84888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:84896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:84840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:48.131 
[2024-11-20 10:41:17.710501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:84904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:84912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:84920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:84928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:84936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710580] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:84944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:84952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:84960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:84968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.131 [2024-11-20 10:41:17.710630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.131 [2024-11-20 10:41:17.710637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:84976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:84984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:84992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:85000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:85008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:85016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:85024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:85032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710744] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:85040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:85048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710780] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:85056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:85064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:85072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.132 [2024-11-20 10:41:17.710814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.132 [2024-11-20 10:41:17.710822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 
nsid:1 lba:85080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:48.132 [2024-11-20 10:41:17.710829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 74 further identical WRITE / ABORTED - SQ DELETION (00/08) record pairs elided: sequential lba 85088-85672 (step 8), len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000, sqid:1, cid varies, timestamps 10:41:17.710837-10:41:17.711898 ...]
00:23:48.134 [2024-11-20 10:41:17.711921] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:48.134 [2024-11-20 10:41:17.711928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:85680 len:8 PRP1 0x0 PRP2 0x0
00:23:48.134 [2024-11-20 10:41:17.711934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.134 [2024-11-20 10:41:17.711946] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
[... 13 further manually-completed WRITE records elided: sequential lba 85688-85784 (step 8), len:8, PRP1 0x0 PRP2 0x0, sqid:1 cid:0, each preceded by "nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o" and followed by ABORTED - SQ DELETION (00/08), timestamps 10:41:17.711952-10:41:17.712254 ...]
00:23:48.135 [2024-11-20 10:41:17.712295] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
[... 4 ASYNC EVENT REQUEST (0c) admin records elided: qid:0, cid:3 down to cid:0, cdw10:00000000 cdw11:00000000, each ABORTED - SQ DELETION (00/08), timestamps 10:41:17.712317-10:41:17.712369 ...]
00:23:48.135 [2024-11-20 10:41:17.712376] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 3] in failed state.
00:23:48.135 [2024-11-20 10:41:17.712406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2528340 (9): Bad file descriptor
00:23:48.135 [2024-11-20 10:41:17.715124] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 3] resetting controller
00:23:48.135 [2024-11-20 10:41:17.866457] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Resetting controller successful.
10706.00 IOPS, 41.82 MiB/s [2024-11-20T09:41:28.866Z] 10825.67 IOPS, 42.29 MiB/s [2024-11-20T09:41:28.866Z] 10923.14 IOPS, 42.67 MiB/s [2024-11-20T09:41:28.866Z] 10976.50 IOPS, 42.88 MiB/s [2024-11-20T09:41:28.866Z] 11019.56 IOPS, 43.05 MiB/s [2024-11-20T09:41:28.866Z]
[2024-11-20 10:41:22.140610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:8720 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.135 [2024-11-20 10:41:22.140651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 13 further READ / ABORTED - SQ DELETION (00/08) record pairs elided: sequential lba 8728-8824 (step 8), len:8, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0, sqid:1, cid varies, timestamps 10:41:22.140666-10:41:22.140855 ...]
00:23:48.135 [2024-11-20 10:41:22.140864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:8832 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:23:48.135 [2024-11-20 10:41:22.140870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:8840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:8848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:8856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:8864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:8872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:8880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 
10:41:22.140958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140966] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:8888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:8896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.140989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.140997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:8904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.141004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.141011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:8912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.141018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.141026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:8920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.141032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.141040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:124 nsid:1 lba:8928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.141046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.135 [2024-11-20 10:41:22.141054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.135 [2024-11-20 10:41:22.141060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:8944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:8960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:8968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:48.136 [2024-11-20 10:41:22.141124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:8976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:8984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:8992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:9000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:9008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:9016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141212] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:9024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:9032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:9048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:9056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:9064 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:9072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:9080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:9088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:9096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:9104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 
10:41:22.141377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:9112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:9120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:9128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:9136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:9144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:9152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:9168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.136 [2024-11-20 10:41:22.141489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:9176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.136 [2024-11-20 10:41:22.141495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:9184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.137 [2024-11-20 10:41:22.141510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:9192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.137 [2024-11-20 10:41:22.141524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:9200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:48.137 [2024-11-20 10:41:22.141539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:9208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.137 [2024-11-20 10:41:22.141553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:9216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:48.137 [2024-11-20 10:41:22.141567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141589] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9224 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141612] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141618] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9232 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141636] nvme_qpair.c: 
579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141641] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9240 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141662] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141666] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9248 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141685] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141690] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9256 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141708] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141713] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9264 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141731] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141737] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9272 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141762] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9280 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141780] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141785] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9288 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141803] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141808] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9296 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141826] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141831] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9304 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141849] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141854] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141873] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141878] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: 
Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9320 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141896] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141901] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9328 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141919] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141925] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9336 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:48.137 [2024-11-20 10:41:22.141943] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:48.137 [2024-11-20 10:41:22.141948] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:48.137 [2024-11-20 10:41:22.141953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9344 len:8 PRP1 0x0 PRP2 0x0 00:23:48.137 [2024-11-20 10:41:22.141960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.137 [2024-11-20 10:41:22.141966] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o
00:23:48.137 [2024-11-20 10:41:22.141970] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:48.137 [2024-11-20 10:41:22.141976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9352 len:8 PRP1 0x0 PRP2 0x0
00:23:48.137 [2024-11-20 10:41:22.141982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... 47 further aborted WRITE completions elided (lba:9360 through lba:9728, step 8); each repeats the same "aborting queued i/o" / "Command completed manually" / "ABORTED - SQ DELETION (00/08) qid:1 cid:0" sequence ...]
00:23:48.139 [2024-11-20 10:41:22.153720] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually:
00:23:48.140 [2024-11-20 10:41:22.153726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9736 len:8 PRP1 0x0 PRP2 0x0
00:23:48.140 [2024-11-20 10:41:22.153732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.140 [2024-11-20 10:41:22.153773] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] Start failover from 10.0.0.2:4422 to 10.0.0.2:4420
00:23:48.140 [2024-11-20 10:41:22.153798] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.140 [2024-11-20 10:41:22.153806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.140 [2024-11-20 10:41:22.153814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.140 [2024-11-20 10:41:22.153820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.140 [2024-11-20 10:41:22.153827] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.140 [2024-11-20 10:41:22.153833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.140 [2024-11-20 10:41:22.153840] nvme_qpair.c:
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:48.140 [2024-11-20 10:41:22.153847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:48.140 [2024-11-20 10:41:22.153854] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 5] in failed state.
00:23:48.140 [2024-11-20 10:41:22.153885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x2528340 (9): Bad file descriptor
00:23:48.140 [2024-11-20 10:41:22.156912] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 5] resetting controller
00:23:48.140 [2024-11-20 10:41:22.226285] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 6] Resetting controller successful.
00:23:48.140 10942.40 IOPS, 42.74 MiB/s [2024-11-20T09:41:28.871Z] 10994.18 IOPS, 42.95 MiB/s [2024-11-20T09:41:28.871Z] 11016.08 IOPS, 43.03 MiB/s [2024-11-20T09:41:28.871Z] 11051.15 IOPS, 43.17 MiB/s [2024-11-20T09:41:28.871Z] 11074.93 IOPS, 43.26 MiB/s
00:23:48.140 Latency(us)
00:23:48.140 [2024-11-20T09:41:28.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.140 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:23:48.140 Verification LBA range: start 0x0 length 0x4000
00:23:48.140 NVMe0n1 : 15.01 11097.02 43.35 1205.00 0.00 10383.69 440.81 21595.67
00:23:48.140 [2024-11-20T09:41:28.871Z] ===================================================================================================================
00:23:48.140 [2024-11-20T09:41:28.871Z] Total : 11097.02 43.35 1205.00 0.00 10383.69 440.81 21595.67
00:23:48.140 Received shutdown signal, test time was about 15.000000 seconds
00:23:48.140
00:23:48.140 Latency(us)
00:23:48.140 [2024-11-20T09:41:28.871Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:23:48.140 [2024-11-20T09:41:28.871Z] ===================================================================================================================
00:23:48.140 [2024-11-20T09:41:28.871Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful'
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 ))
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=3322301
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 3322301 /var/tmp/bdevperf.sock
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@835 -- # '[' -z 3322301 ']'
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@840 -- # local max_retries=100
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...'
00:23:48.140 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@844 -- # xtrace_disable
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@868 -- # return 0
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
00:23:48.140 [2024-11-20 10:41:28.654402] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 ***
00:23:48.140 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
00:23:48.140 [2024-11-20 10:41:28.846963] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 ***
00:23:48.399 10:41:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:48.668 NVMe0n1
00:23:48.668 10:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:48.930
00:23:48.930 10:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover
00:23:49.497
00:23:49.497 10:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:49.497 10:41:29 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0
00:23:49.497 10:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
00:23:49.777 10:41:30 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3
00:23:53.128 10:41:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers
00:23:53.128 10:41:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0
00:23:53.128 10:41:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=3323061
00:23:53.128 10:41:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
00:23:53.128 10:41:33 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 3323061
00:23:54.062 {
00:23:54.062 "results": [
00:23:54.062 {
00:23:54.062 "job": "NVMe0n1",
00:23:54.062 "core_mask": "0x1",
00:23:54.062 "workload": "verify",
00:23:54.062 "status": "finished",
00:23:54.062 "verify_range": {
00:23:54.062 "start": 0,
00:23:54.062 "length": 16384
00:23:54.062 },
00:23:54.062 "queue_depth": 128,
00:23:54.062 "io_size": 4096,
00:23:54.062 "runtime": 1.007765,
00:23:54.062 "iops": 11308.191889974349,
00:23:54.062 "mibps": 44.1726245702123,
00:23:54.062 "io_failed": 0,
00:23:54.062 "io_timeout": 0,
00:23:54.062 "avg_latency_us": 11278.140096608668,
00:23:54.062 "min_latency_us": 2168.9295238095237,
00:23:54.062 "max_latency_us": 12545.462857142857
00:23:54.062 }
00:23:54.062 ],
00:23:54.062 "core_count": 1
00:23:54.062 }
00:23:54.062 10:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt
00:23:54.062 [2024-11-20 10:41:28.278244] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:23:54.062 [2024-11-20 10:41:28.278302] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3322301 ]
00:23:54.062 [2024-11-20 10:41:28.353383] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:23:54.062 [2024-11-20 10:41:28.390572] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:23:54.062 [2024-11-20 10:41:30.350300] bdev_nvme.c:2052:bdev_nvme_failover_trid: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] Start failover from 10.0.0.2:4420 to 10.0.0.2:4421
00:23:54.062 [2024-11-20 10:41:30.350357] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.062 [2024-11-20 10:41:30.350369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.062 [2024-11-20 10:41:30.350379] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.062 [2024-11-20 10:41:30.350386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.062 [2024-11-20 10:41:30.350393] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.062 [2024-11-20 10:41:30.350400] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.062 [2024-11-20 10:41:30.350407] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:54.062 [2024-11-20 10:41:30.350413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:54.062 [2024-11-20 10:41:30.350420] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 7] in failed state.
00:23:54.063 [2024-11-20 10:41:30.350450] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 7] resetting controller
00:23:54.063 [2024-11-20 10:41:30.350466] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a72340 (9): Bad file descriptor
00:23:54.063 [2024-11-20 10:41:30.360848] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 10] Resetting controller successful.
00:23:54.063 Running I/O for 1 seconds...
00:23:54.063 11267.00 IOPS, 44.01 MiB/s 00:23:54.063 Latency(us) 00:23:54.063 [2024-11-20T09:41:34.794Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:54.063 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:54.063 Verification LBA range: start 0x0 length 0x4000 00:23:54.063 NVMe0n1 : 1.01 11308.19 44.17 0.00 0.00 11278.14 2168.93 12545.46 00:23:54.063 [2024-11-20T09:41:34.794Z] =================================================================================================================== 00:23:54.063 [2024-11-20T09:41:34.794Z] Total : 11308.19 44.17 0.00 0.00 11278.14 2168.93 12545.46 00:23:54.063 10:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:54.063 10:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:54.321 10:41:34 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.579 10:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:54.579 10:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:54.837 10:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:54.837 10:41:35 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 3322301 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3322301 ']' 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3322301 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3322301 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3322301' 00:23:58.122 killing process with pid 3322301 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3322301 00:23:58.122 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3322301 00:23:58.380 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:58.380 10:41:38 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # nvmfcleanup 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@99 -- # sync 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@102 -- # set +e 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@103 -- # for i in {1..20} 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:23:58.638 rmmod nvme_tcp 00:23:58.638 rmmod nvme_fabrics 00:23:58.638 rmmod nvme_keyring 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@106 -- # set -e 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@107 -- # return 0 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@336 -- # '[' -n 3319310 ']' 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@337 -- # killprocess 3319310 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # '[' -z 3319310 ']' 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # kill -0 3319310 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # uname 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3319310 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3319310' 00:23:58.638 killing process with pid 3319310 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@973 -- # kill 3319310 00:23:58.638 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@978 -- # wait 3319310 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # nvmf_fini 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@264 -- # local dev 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@267 -- # remove_target_ns 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:23:58.896 10:41:39 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@130 -- # return 0 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_0 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # _dev=0 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@41 -- # dev_map=() 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/setup.sh@284 -- # iptr 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-save 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:00.800 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@542 -- # iptables-restore 00:24:01.059 00:24:01.059 real 0m37.550s 00:24:01.059 user 
1m58.525s 00:24:01.059 sys 0m7.958s 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:24:01.059 ************************************ 00:24:01.059 END TEST nvmf_failover 00:24:01.059 ************************************ 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:01.059 ************************************ 00:24:01.059 START TEST nvmf_host_multipath_status 00:24:01.059 ************************************ 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:01.059 * Looking for test storage... 
00:24:01.059 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lcov --version 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # IFS=.-: 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@336 -- # read -ra ver1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # IFS=.-: 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@337 -- # read -ra ver2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@338 -- # local 'op=<' 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@340 -- # ver1_l=2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@341 -- # ver2_l=1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@344 -- # case "$op" in 00:24:01.059 10:41:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@345 -- # : 1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # decimal 1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@365 -- # ver1[v]=1 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # decimal 2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@353 -- # local d=2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@355 -- # echo 2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@366 -- # ver2[v]=2 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@368 -- # return 0 00:24:01.059 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:01.317 10:41:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:01.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.317 --rc genhtml_branch_coverage=1 00:24:01.317 --rc genhtml_function_coverage=1 00:24:01.317 --rc genhtml_legend=1 00:24:01.317 --rc geninfo_all_blocks=1 00:24:01.317 --rc geninfo_unexecuted_blocks=1 00:24:01.317 00:24:01.317 ' 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:01.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.317 --rc genhtml_branch_coverage=1 00:24:01.317 --rc genhtml_function_coverage=1 00:24:01.317 --rc genhtml_legend=1 00:24:01.317 --rc geninfo_all_blocks=1 00:24:01.317 --rc geninfo_unexecuted_blocks=1 00:24:01.317 00:24:01.317 ' 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:01.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.317 --rc genhtml_branch_coverage=1 00:24:01.317 --rc genhtml_function_coverage=1 00:24:01.317 --rc genhtml_legend=1 00:24:01.317 --rc geninfo_all_blocks=1 00:24:01.317 --rc geninfo_unexecuted_blocks=1 00:24:01.317 00:24:01.317 ' 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:01.317 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:01.317 --rc genhtml_branch_coverage=1 00:24:01.317 --rc genhtml_function_coverage=1 00:24:01.317 --rc genhtml_legend=1 00:24:01.317 --rc geninfo_all_blocks=1 00:24:01.317 --rc geninfo_unexecuted_blocks=1 00:24:01.317 00:24:01.317 ' 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:01.317 
10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:01.317 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
scripts/common.sh@15 -- # shopt -s extglob 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:01.318 10:41:41 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@50 -- # : 0 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:01.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 
00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # remove_target_ns 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # xtrace_disable 00:24:01.318 10:41:41 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # pci_devs=() 00:24:07.887 10:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # net_devs=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # e810=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@136 -- # local -ga e810 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # x722=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@137 -- # local -ga x722 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # mlx=() 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@138 -- # local -ga mlx 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:07.887 10:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:07.887 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:07.887 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:07.887 Found net devices under 0000:86:00.0: cvl_0_0 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:07.887 Found net devices under 0000:86:00.1: cvl_0_1 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:07.887 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@262 -- # is_hw=yes 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@257 -- # create_target_ns 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=lo 
in_ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@28 -- # local -g _dev 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # ips=() 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:07.888 10:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:07.888 10:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772161 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:07.888 10.0.0.1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@11 -- # local val=167772162 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 
2 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:07.888 10.0.0.2 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.888 
10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@38 -- # ping_ips 1 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:07.888 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:07.889 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:07.889 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.492 ms 00:24:07.889 00:24:07.889 --- 10.0.0.1 ping statistics --- 00:24:07.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.889 rtt min/avg/max/mdev = 0.492/0.492/0.492/0.000 ms 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:24:07.889 10:41:47 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:07.889 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:07.889 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:24:07.889 00:24:07.889 --- 10.0.0.2 ping statistics --- 00:24:07.889 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:07.889 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # return 0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@107 -- # local dev=initiator0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@107 -- # local dev=initiator1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@107 -- # local dev=target1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@109 -- # return 1 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@168 -- # dev= 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@169 -- # return 0 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:07.889 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@726 -- # xtrace_disable 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@328 -- # nvmfpid=3327503 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # waitforlisten 3327503 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3327503 ']' 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:07.890 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:07.890 10:41:47 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.890 [2024-11-20 10:41:47.912845] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:24:07.890 [2024-11-20 10:41:47.912896] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:07.890 [2024-11-20 10:41:47.993723] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:07.890 [2024-11-20 10:41:48.034428] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:07.890 [2024-11-20 10:41:48.034466] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:24:07.890 [2024-11-20 10:41:48.034473] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:07.890 [2024-11-20 10:41:48.034479] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:07.890 [2024-11-20 10:41:48.034487] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:07.890 [2024-11-20 10:41:48.035677] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:24:07.890 [2024-11-20 10:41:48.035679] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@732 -- # xtrace_disable 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=3327503 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:07.890 [2024-11-20 10:41:48.338615] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc0 00:24:07.890 Malloc0 00:24:07.890 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:08.148 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:08.405 10:41:48 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:08.405 [2024-11-20 10:41:49.120306] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:08.733 [2024-11-20 10:41:49.308762] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=3327779 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 3327779 /var/tmp/bdevperf.sock 00:24:08.733 10:41:49 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@835 -- # '[' -z 3327779 ']' 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@840 -- # local max_retries=100 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:08.733 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@844 -- # xtrace_disable 00:24:08.733 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:08.990 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:24:08.990 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@868 -- # return 0 00:24:08.991 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:09.248 10:41:49 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:09.812 Nvme0n1 00:24:09.813 10:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:10.070 Nvme0n1 00:24:10.070 10:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:10.070 10:41:50 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:11.968 10:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:11.968 10:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:12.227 10:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:12.484 10:41:52 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:13.417 10:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:13.417 10:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:13.417 10:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.417 10:41:53 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:13.675 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.675 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:13.675 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:13.675 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:13.933 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.191 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.191 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:14.191 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.191 10:41:54 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:14.450 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.450 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:14.450 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:14.450 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:14.707 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:14.707 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:14.707 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:14.707 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:14.965 10:41:55 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:16.337 10:41:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.337 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.337 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:16.337 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
-s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.337 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:16.595 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.595 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:16.595 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.595 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:16.852 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:16.852 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:16.852 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.852 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.110 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.110 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.110 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.110 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.368 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.368 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:17.368 10:41:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:17.368 10:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:17.626 10:41:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:18.999 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.000 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.000 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.000 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.000 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.000 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.257 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.257 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.258 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.258 10:41:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:19.516 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status 
-- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.516 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:19.516 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.516 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:19.774 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.774 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:19.774 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.774 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.031 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.031 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:20.032 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.032 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:20.289 10:42:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.661 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.919 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.176 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.177 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.177 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.177 10:42:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.450 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.450 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:22.450 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.450 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.718 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.718 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:22.718 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:22.976 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:22.976 10:42:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:23.972 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:23.972 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:23.972 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:23.972 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.229 10:42:04 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.229 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:24.229 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.229 10:42:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.487 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.487 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.487 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.487 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:24.744 
10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.744 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.002 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.002 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:25.002 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.002 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.259 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.259 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:25.259 10:42:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:25.517 10:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:25.517 10:42:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:26.898 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:26.898 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:26.898 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.898 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.899 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.156 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.156 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.156 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.156 10:42:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.413 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.413 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:27.414 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.414 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.671 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.671 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:27.671 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.671 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.930 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.930 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:27.930 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # set_ANA_state optimized optimized 00:24:27.930 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:28.189 10:42:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.448 10:42:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:29.383 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:29.384 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:29.384 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:29.384 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.643 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.643 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:29.643 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.643 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.902 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.902 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.902 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.902 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock 
bdev_nvme_get_io_paths 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.161 10:42:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.420 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.420 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.420 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.420 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.677 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.677 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:30.677 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:30.936 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:31.196 10:42:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:32.130 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:32.130 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:32.130 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.130 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.389 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:32.389 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.389 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.389 10:42:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.648 10:42:13 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.648 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.906 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.906 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.906 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.906 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.165 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.165 
10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.165 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.165 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.423 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.423 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:33.423 10:42:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.423 10:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:33.682 10:42:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 00:24:34.618 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:34.618 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:34.876 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.876 10:42:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.876 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:34.876 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.876 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.876 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.135 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.135 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.135 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.135 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.393 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.394 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.394 10:42:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.394 10:42:15 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.652 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.652 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.652 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.652 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.911 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.911 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.911 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.911 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:36.169 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:36.169 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:36.169 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state 
nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:36.169 10:42:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:36.428 10:42:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:37.364 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:37.364 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.364 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.364 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.622 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.622 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:37.622 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.622 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.882 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:37.882 10:42:18 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.882 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.882 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:38.141 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.141 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:38.141 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.141 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:38.400 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.400 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:38.400 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.400 10:42:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.666 
10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 3327779 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3327779 ']' 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3327779 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3327779 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3327779' 00:24:38.666 killing process with pid 3327779 00:24:38.666 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3327779 00:24:38.666 
10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3327779 00:24:38.952 { 00:24:38.952 "results": [ 00:24:38.952 { 00:24:38.952 "job": "Nvme0n1", 00:24:38.952 "core_mask": "0x4", 00:24:38.952 "workload": "verify", 00:24:38.952 "status": "terminated", 00:24:38.952 "verify_range": { 00:24:38.952 "start": 0, 00:24:38.952 "length": 16384 00:24:38.952 }, 00:24:38.952 "queue_depth": 128, 00:24:38.952 "io_size": 4096, 00:24:38.952 "runtime": 28.646851, 00:24:38.952 "iops": 10607.064629895971, 00:24:38.952 "mibps": 41.43384621053114, 00:24:38.952 "io_failed": 0, 00:24:38.952 "io_timeout": 0, 00:24:38.952 "avg_latency_us": 12045.001811096909, 00:24:38.952 "min_latency_us": 265.2647619047619, 00:24:38.952 "max_latency_us": 3083812.083809524 00:24:38.952 } 00:24:38.952 ], 00:24:38.952 "core_count": 1 00:24:38.952 } 00:24:38.952 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 3327779 00:24:38.952 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:38.952 [2024-11-20 10:41:49.380700] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:24:38.952 [2024-11-20 10:41:49.380751] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3327779 ] 00:24:38.952 [2024-11-20 10:41:49.454542] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:38.952 [2024-11-20 10:41:49.496650] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:24:38.952 Running I/O for 90 seconds... 
00:24:38.952 11518.00 IOPS, 44.99 MiB/s [2024-11-20T09:42:19.683Z] 11483.00 IOPS, 44.86 MiB/s [2024-11-20T09:42:19.683Z] 11447.33 IOPS, 44.72 MiB/s [2024-11-20T09:42:19.683Z] 11449.50 IOPS, 44.72 MiB/s [2024-11-20T09:42:19.683Z] 11464.00 IOPS, 44.78 MiB/s [2024-11-20T09:42:19.683Z] 11443.33 IOPS, 44.70 MiB/s [2024-11-20T09:42:19.683Z] 11433.43 IOPS, 44.66 MiB/s [2024-11-20T09:42:19.683Z] 11443.50 IOPS, 44.70 MiB/s [2024-11-20T09:42:19.683Z] 11443.89 IOPS, 44.70 MiB/s [2024-11-20T09:42:19.683Z] 11425.60 IOPS, 44.63 MiB/s [2024-11-20T09:42:19.683Z] 11435.36 IOPS, 44.67 MiB/s [2024-11-20T09:42:19.683Z] 11422.33 IOPS, 44.62 MiB/s [2024-11-20T09:42:19.683Z] [2024-11-20 10:42:03.432140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.952 [2024-11-20 10:42:03.432176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432257] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:56 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.952 [2024-11-20 10:42:03.432434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.952 [2024-11-20 10:42:03.432441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 
nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:123 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114208 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.432847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.432855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433436] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 
cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114296 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 
sqhd:0012 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.953 [2024-11-20 10:42:03.433720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.953 [2024-11-20 10:42:03.433726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:24:38.954 [2024-11-20 10:42:03.433893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.433980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.433992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:38.954 [2024-11-20 10:42:03.433998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.954 
[2024-11-20 10:42:03.434104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 
10:42:03.434215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 
10:42:03.434322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.954 [2024-11-20 10:42:03.434408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.954 [2024-11-20 
10:42:03.434428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.954 [2024-11-20 10:42:03.434447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.954 [2024-11-20 10:42:03.434460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.954 [2024-11-20 10:42:03.434466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.434929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.955 [2024-11-20 10:42:03.434942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.434956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.434964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.434976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.434983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 
10:42:03.434995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 
10:42:03.435103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 
10:42:03.435224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 
10:42:03.435335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 
10:42:03.435445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 
10:42:03.435550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435660] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435681] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.955 [2024-11-20 10:42:03.435689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.955 [2024-11-20 10:42:03.435701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.956 [2024-11-20 10:42:03.435727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435760] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435766] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435878] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.435897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.435904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436414] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436487] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436527] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436534] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436572] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436628] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436739] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.956 [2024-11-20 10:42:03.436777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.956 [2024-11-20 10:42:03.436784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436950] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.436988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.436995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.437007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.437014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.437026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.437034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.437046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.437053] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.437065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.437072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.437083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447334] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.447468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.447478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448235] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448381] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.957 [2024-11-20 10:42:03.448433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.957 [2024-11-20 10:42:03.448450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448530] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448671] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448819] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.958 [2024-11-20 10:42:03.448854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.958 [2024-11-20 10:42:03.448880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.958 [2024-11-20 10:42:03.448906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.958 [2024-11-20 10:42:03.448934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448960] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.448977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.448987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449108] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449256] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449403] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.958 [2024-11-20 10:42:03.449466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.958 [2024-11-20 10:42:03.449483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449545] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449701] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.449952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.959 [2024-11-20 10:42:03.449978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.449995] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450136] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450152] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.450179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.450188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451162] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.959 [2024-11-20 10:42:03.451387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.959 [2024-11-20 10:42:03.451397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451470] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451533] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.960 [2024-11-20 10:42:03.451604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.960 [2024-11-20 10:42:03.451613] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:007f p:0 m:0 dnr:0
00:24:38.960 [2024-11-20 10:42:03.451629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.960 [2024-11-20 10:42:03.451640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated command/completion pairs elided: WRITE (and a few READ) commands on qid:1 for lba 113928 through 114952, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), sqhd 0001 through 0071 ...]
00:24:38.963 [2024-11-20 10:42:03.461723] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:38.963 [2024-11-20 10:42:03.461731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0072 p:0 m:0 dnr:0
00:24:38.963 [2024-11-20 10:42:03.461746] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461878] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.461988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.461997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462011] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462141] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462283] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.963 [2024-11-20 10:42:03.462381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.963 [2024-11-20 10:42:03.462390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462415] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462551] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.462694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.462703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463345] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463393] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463477] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463600] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463616] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463747] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463798] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.964 [2024-11-20 10:42:03.463846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.964 [2024-11-20 10:42:03.463863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.965 [2024-11-20 10:42:03.463873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.463889] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.965 [2024-11-20 10:42:03.463898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.463912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.965 [2024-11-20 10:42:03.463921] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.463937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.965 [2024-11-20 10:42:03.463946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.463962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.965 [2024-11-20 10:42:03.463972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.463987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.965 [2024-11-20 10:42:03.463998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.464013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.965 [2024-11-20 10:42:03.464022] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.965 [2024-11-20 10:42:03.464038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.965 [2024-11-20 10:42:03.464048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 m:0 dnr:0
[log trimmed: the same nvme_qpair.c NOTICE pair repeats for the remaining outstanding READ/WRITE commands on sqid:1 (nsid:1, lba:113928-114944, len:8); every completion reports ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, timestamps 2024-11-20 10:42:03.464038 through 10:42:03.468755]
00:24:38.968 [2024-11-20 10:42:03.468755] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468928] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.468982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.468993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469111] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.968 [2024-11-20 10:42:03.469123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.968 [2024-11-20 10:42:03.469155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.968 [2024-11-20 10:42:03.469187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.968 [2024-11-20 10:42:03.469224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469289] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469472] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469643] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.968 [2024-11-20 10:42:03.469706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.968 [2024-11-20 10:42:03.469727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469825] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.469984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.469995] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470045] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470170] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470347] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.969 [2024-11-20 10:42:03.470472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.470525] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.470536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471703] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471714] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471898] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471943] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.969 [2024-11-20 10:42:03.471976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.969 [2024-11-20 10:42:03.471996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472266] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472298] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472440] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472620] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472806] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472955] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.472987] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.472999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.473019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.473030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.473050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.473061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.473081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.970 [2024-11-20 10:42:03.473093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.970 [2024-11-20 10:42:03.473112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.473147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473158] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.473178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.473215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.473928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.473970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.473983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474036] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474219] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474408] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474585] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474771] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474947] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.474968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.971 [2024-11-20 10:42:03.474981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.475002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.971 [2024-11-20 10:42:03.475013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.475034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.971 [2024-11-20 10:42:03.475046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.971 [2024-11-20 10:42:03.475067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.972 [2024-11-20 10:42:03.475079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.972 [2024-11-20 10:42:03.475111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475134] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475317] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475679] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475700] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.972 [2024-11-20 10:42:03.475865] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.972 [2024-11-20 10:42:03.475876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.972 [... ~200 further NOTICE entries elided: WRITE (and a few READ) commands on sqid:1 nsid:1, len:8, covering lba 113928-114944, each printed by nvme_io_qpair_print_command and each completing via spdk_nvme_print_completion with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, sqhd advancing 0x0053-0x0062 then wrapping 0x0000-0x0045, timestamps 2024-11-20 10:42:03.475-10:42:03.479 ...] [2024-11-20 10:42:03.479677] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479800] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.975 [2024-11-20 10:42:03.479866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.975 [2024-11-20 10:42:03.479874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.479890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.479898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.479912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.479920] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.479933] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.479941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480570] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480691] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480815] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.976 [2024-11-20 10:42:03.480845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480889] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480923] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.480990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.480998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481055] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481173] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.976 [2024-11-20 10:42:03.481237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.976 [2024-11-20 10:42:03.481245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481303] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481418] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481544] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481564] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481657] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.481713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.481721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.482237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.482252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.482270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.482279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.977 [2024-11-20 10:42:03.482293] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.977 [2024-11-20 10:42:03.482302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:000e p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion pairs elided: WRITE and READ I/Os on qid:1 nsid:1 (lba range approx. 113928-114944, len:8 each) were all completed with status ASYMMETRIC ACCESS INACCESSIBLE (03/02) between 10:42:03.482 and 10:42:03.485 ...]
00:24:38.980 [2024-11-20 10:42:03.485865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485879] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.485894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.485917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.485938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.485960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.485983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.485992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.486008] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.486017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.486030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.486039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.486053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.486061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.486075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.486084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.980 [2024-11-20 10:42:03.486098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.980 [2024-11-20 10:42:03.486106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486128] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486254] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486385] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486500] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486611] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486624] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486730] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.981 [2024-11-20 10:42:03.486925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.981 [2024-11-20 10:42:03.486940] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.486948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.486961] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.486968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.486981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.486989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487073] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.982 [2024-11-20 10:42:03.487094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.982 [2024-11-20 10:42:03.487114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.982 [2024-11-20 10:42:03.487135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.982 [2024-11-20 10:42:03.487156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487704] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487814] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487931] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.487979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.487992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488039] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488154] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.982 [2024-11-20 10:42:03.488243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.982 [2024-11-20 10:42:03.488251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488272] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488388] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.983 [2024-11-20 10:42:03.488476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488497] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488967] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.488988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.488996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489078] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489197] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.983 [2024-11-20 10:42:03.489338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.983 [2024-11-20 10:42:03.489345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489437] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489548] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489662] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489774] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.489982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.489994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490002] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490044] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.984 [2024-11-20 10:42:03.490108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.984 [2024-11-20 10:42:03.490121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490768] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490890] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.490993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.490999] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491080] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491121] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.985 [2024-11-20 10:42:03.491171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.985 [2024-11-20 10:42:03.491191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.985 [2024-11-20 10:42:03.491217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.985 [2024-11-20 10:42:03.491238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491358] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.985 [2024-11-20 10:42:03.491459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.985 [2024-11-20 10:42:03.491467] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491584] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491695] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491811] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.491832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.491839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492368] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492443] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492464] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492471] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492487] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:113928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.986 [2024-11-20 10:42:03.492495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:113968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:113976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:113984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:113992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:114000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492596] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:114008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:114016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:114024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:114032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:114040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.986 [2024-11-20 10:42:03.492699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:38.986 [2024-11-20 10:42:03.492713] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:114048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:114056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:114064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:114072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:114080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:114088 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492826] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:114096 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:114104 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:114112 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:114120 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:114128 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492942] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:114136 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:114144 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.492984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:114152 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.492994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:114160 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:114168 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:114176 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493057] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:114184 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:114192 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114200 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114208 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114216 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493174] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493293] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:114280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:114288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:114296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:114304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493820] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:114312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:114320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:114328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:114336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:114344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:114352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.987 [2024-11-20 10:42:03.493934] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:38.987 [2024-11-20 10:42:03.493947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:114360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.493955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.493967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:114368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.493976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.493988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:114376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.493995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:114384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:114392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494049] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:114400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:114408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:114416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:114424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:114432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:114440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494159] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:114448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:114456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:114464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:114472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:114480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494290] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:114488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:114496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:114504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:114512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:114520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:114528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494402] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:114536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:114544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:114552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:114560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:114568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494519] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:114576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:114584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:114600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:114608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:114616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494771] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:114624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.988 [2024-11-20 10:42:03.494794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:38.988 [2024-11-20 10:42:03.494809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:114632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.494817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:114640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.494850] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:113936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.989 [2024-11-20 10:42:03.494872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:113944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.989 [2024-11-20 10:42:03.494895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494910] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:113952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.989 [2024-11-20 10:42:03.494918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:113960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:38.989 [2024-11-20 10:42:03.494942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:114648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.494965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.494980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:114656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.494987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:114664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:114672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495032] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:114680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:114688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:114696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:114704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:114712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495160] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:114720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:114728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:114736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:114744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:114752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:114760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495286] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:114768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:114776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:114784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:114792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:114800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495413] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:114808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:114816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:114824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:114832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:114840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495526] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:114848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495536] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:114856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:114864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:114872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:114880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:114888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495759] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:114896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.989 [2024-11-20 10:42:03.495767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.989 [2024-11-20 10:42:03.495786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:114904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.990 [2024-11-20 10:42:03.495810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:114912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.990 [2024-11-20 10:42:03.495835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:114920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.990 [2024-11-20 10:42:03.495859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:114928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:38.990 [2024-11-20 10:42:03.495882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:114936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495890] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:38.990 [2024-11-20 10:42:03.495907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:114944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.990 [2024-11-20 10:42:03.495915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:38.990
[... repeated nvme_qpair.c command/completion notice pairs elided: WRITE/READ sqid:1, lba 113928-114248, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 10:42:03.495931-10:42:03.496968 ...]
00:24:38.991 [2024-11-20 10:42:03.496987] nvme_qpair.c:
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:114256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.991 [2024-11-20 10:42:03.496996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:38.991 [2024-11-20 10:42:03.497014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:114264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.991 [2024-11-20 10:42:03.497022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:38.991 [2024-11-20 10:42:03.497040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:114272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.991 [2024-11-20 10:42:03.497049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:38.991 11177.92 IOPS, 43.66 MiB/s [2024-11-20T09:42:19.722Z] 10379.50 IOPS, 40.54 MiB/s [2024-11-20T09:42:19.722Z] 9687.53 IOPS, 37.84 MiB/s [2024-11-20T09:42:19.722Z] 9236.56 IOPS, 36.08 MiB/s [2024-11-20T09:42:19.722Z] 9369.59 IOPS, 36.60 MiB/s [2024-11-20T09:42:19.722Z] 9480.56 IOPS, 37.03 MiB/s [2024-11-20T09:42:19.722Z] 9681.63 IOPS, 37.82 MiB/s [2024-11-20T09:42:19.722Z] 9879.55 IOPS, 38.59 MiB/s [2024-11-20T09:42:19.722Z] 10036.19 IOPS, 39.20 MiB/s [2024-11-20T09:42:19.722Z] 10097.77 IOPS, 39.44 MiB/s [2024-11-20T09:42:19.722Z] 10152.91 IOPS, 39.66 MiB/s [2024-11-20T09:42:19.722Z] 10238.00 IOPS, 39.99 MiB/s [2024-11-20T09:42:19.722Z] 10377.56 IOPS, 40.54 MiB/s [2024-11-20T09:42:19.722Z] 10507.73 IOPS, 41.05 MiB/s [2024-11-20T09:42:19.722Z] [2024-11-20 10:42:17.058089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:121936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.991 [2024-11-20 10:42:17.058125] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:38.991
[... repeated nvme_qpair.c command/completion notice pairs elided: WRITE/READ sqid:1, lba 121888-122592, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02), timestamps 10:42:17.058159-10:42:17.060735 ...]
00:24:38.992 [2024-11-20 10:42:17.060747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:122608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.992 [2024-11-20 10:42:17.060754] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:38.992 [2024-11-20 10:42:17.060766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:122624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.992 [2024-11-20 10:42:17.060773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:38.992 [2024-11-20 10:42:17.060785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:122640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.992 [2024-11-20 10:42:17.060792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:38.992 [2024-11-20 10:42:17.060804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:122656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:38.992 [2024-11-20 10:42:17.060810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:38.992 10567.70 IOPS, 41.28 MiB/s [2024-11-20T09:42:19.723Z] 10593.68 IOPS, 41.38 MiB/s [2024-11-20T09:42:19.723Z] Received shutdown signal, test time was about 28.647501 seconds 00:24:38.992 00:24:38.992 Latency(us) 00:24:38.992 [2024-11-20T09:42:19.723Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:38.992 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:38.992 Verification LBA range: start 0x0 length 0x4000 00:24:38.992 Nvme0n1 : 28.65 10607.06 41.43 0.00 0.00 12045.00 265.26 3083812.08 00:24:38.992 [2024-11-20T09:42:19.723Z] =================================================================================================================== 00:24:38.992 [2024-11-20T09:42:19.723Z] Total : 10607.06 41.43 0.00 
0.00 12045.00 265.26 3083812.08 00:24:38.992 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@99 -- # sync 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@102 -- # set +e 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:39.250 rmmod nvme_tcp 00:24:39.250 rmmod nvme_fabrics 00:24:39.250 rmmod nvme_keyring 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@106 -- # set -e 00:24:39.250 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@107 -- # return 0 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@336 -- # '[' -n 3327503 ']' 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@337 -- # killprocess 3327503 
00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # '[' -z 3327503 ']' 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # kill -0 3327503 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # uname 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3327503 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3327503' 00:24:39.251 killing process with pid 3327503 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@973 -- # kill 3327503 00:24:39.251 10:42:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@978 -- # wait 3327503 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # nvmf_fini 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@264 -- # local dev 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:24:39.510 10:42:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@130 -- # return 0 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # _dev=0 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@41 -- # dev_map=() 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/setup.sh@284 -- # iptr 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-save 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@542 -- # iptables-restore 00:24:41.412 00:24:41.412 real 0m40.513s 00:24:41.412 user 1m49.220s 00:24:41.412 sys 0m11.741s 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:41.412 10:42:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:41.412 ************************************ 00:24:41.412 END TEST nvmf_host_multipath_status 00:24:41.412 ************************************ 00:24:41.671 10:42:22 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.671 10:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:41.671 10:42:22 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1111 -- # xtrace_disable 00:24:41.671 10:42:22 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:41.671 ************************************ 00:24:41.671 START TEST nvmf_identify_kernel_target 00:24:41.671 ************************************ 00:24:41.671 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:24:41.672 * Looking for test storage... 00:24:41.672 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lcov --version 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # IFS=.-: 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@336 -- # read -ra ver1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # IFS=.-: 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@337 -- # read -ra ver2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- scripts/common.sh@338 -- # local 'op=<' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@340 -- # ver1_l=2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@341 -- # ver2_l=1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@344 -- # case "$op" in 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@345 -- # : 1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # decimal 1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@365 -- # ver1[v]=1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # decimal 2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@353 -- # local d=2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@355 -- # echo 2 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@366 -- # ver2[v]=2 00:24:41.672 10:42:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@368 -- # return 0 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.672 --rc genhtml_branch_coverage=1 00:24:41.672 --rc genhtml_function_coverage=1 00:24:41.672 --rc genhtml_legend=1 00:24:41.672 --rc geninfo_all_blocks=1 00:24:41.672 --rc geninfo_unexecuted_blocks=1 00:24:41.672 00:24:41.672 ' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.672 --rc genhtml_branch_coverage=1 00:24:41.672 --rc genhtml_function_coverage=1 00:24:41.672 --rc genhtml_legend=1 00:24:41.672 --rc geninfo_all_blocks=1 00:24:41.672 --rc geninfo_unexecuted_blocks=1 00:24:41.672 00:24:41.672 ' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:41.672 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:41.672 --rc genhtml_branch_coverage=1 00:24:41.672 --rc genhtml_function_coverage=1 00:24:41.672 --rc genhtml_legend=1 00:24:41.672 --rc geninfo_all_blocks=1 00:24:41.672 --rc geninfo_unexecuted_blocks=1 00:24:41.672 00:24:41.672 ' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:41.672 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:24:41.672 --rc genhtml_branch_coverage=1 00:24:41.672 --rc genhtml_function_coverage=1 00:24:41.672 --rc genhtml_legend=1 00:24:41.672 --rc geninfo_all_blocks=1 00:24:41.672 --rc geninfo_unexecuted_blocks=1 00:24:41.672 00:24:41.672 ' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:41.672 10:42:22 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:41.672 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@15 -- # shopt -s extglob 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:41.931 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@50 -- # : 0 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:41.932 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # remove_target_ns 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # xtrace_disable 00:24:41.932 10:42:22 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
common/autotest_common.sh@10 -- # set +x 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # pci_devs=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # net_devs=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # e810=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@136 -- # local -ga e810 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # x722=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@137 -- # local -ga x722 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # mlx=() 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@138 -- # local -ga mlx 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@142 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@172 -- 
# pci_devs=("${e810[@]}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:48.494 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:48.494 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:24:48.494 10:42:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:48.494 Found net devices under 0000:86:00.0: cvl_0_0 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@233 -- 
# for net_dev in "${!pci_net_devs[@]}" 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:48.494 Found net devices under 0000:86:00.1: cvl_0_1 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@262 -- # is_hw=yes 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@257 -- # create_target_ns 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@145 -- # ip netns add 
nvmf_ns_spdk 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.494 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@28 -- # local -g _dev 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:24:48.495 10:42:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # ips=() 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns 
nvmf_ns_spdk 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772161 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:24:48.495 10.0.0.1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@11 -- # local val=167772162 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:24:48.495 10.0.0.2 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
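[editor's note] The `val_to_ip` calls traced above turn the numeric pool values (167772161, 167772162) into dotted-quad addresses. A standalone sketch of that conversion, with the function body reconstructed from the trace (the exact `setup.sh` internals may differ):

```shell
# Unpack a 32-bit integer into dotted-quad octets, as setup.sh's
# val_to_ip does via printf (body reconstructed from the trace).
val_to_ip() {
    local val=$1
    printf '%u.%u.%u.%u\n' \
        $(((val >> 24) & 0xff)) \
        $(((val >> 16) & 0xff)) \
        $(((val >> 8) & 0xff)) \
        $((val & 0xff))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This is why the IP pool starts at `0x0a000001` and advances by 2 per device pair: each pair consumes consecutive addresses in 10.0.0.0/24.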
nvmf/setup.sh@38 -- # ping_ips 1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:24:48.495 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:24:48.495 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.458 ms 00:24:48.495 00:24:48.495 --- 10.0.0.1 ping statistics --- 00:24:48.495 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.495 rtt min/avg/max/mdev = 0.458/0.458/0.458/0.000 ms 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:48.495 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:24:48.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:48.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.208 ms 00:24:48.496 00:24:48.496 --- 10.0.0.2 ping statistics --- 00:24:48.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:48.496 rtt min/avg/max/mdev = 0.208/0.208/0.208/0.000 ms 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # return 0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- 
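[editor's note] Condensed, the `setup_interfaces`/`ping_ips` sequence traced above amounts to the following wiring (device and namespace names taken from the log; this is a privileged configuration sketch that assumes root and the physical `cvl_0_*` NICs, not a verbatim extract of `setup.sh`):

```shell
# Target-side namespace; move the target NIC into it, address both ends.
ip netns add nvmf_ns_spdk
ip netns exec nvmf_ns_spdk ip link set lo up
ip link set cvl_0_1 netns nvmf_ns_spdk

ip addr add 10.0.0.1/24 dev cvl_0_0                           # initiator
ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1  # target

ip link set cvl_0_0 up
ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up
iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port

# Sanity checks, mirroring ping_ips: each side pings the other.
ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1   # target ns -> initiator
ping -c 1 10.0.0.2                              # initiator -> target ns
```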
nvmf/setup.sh@107 -- # local dev=initiator0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target 
-- nvmf/setup.sh@107 -- # local dev=initiator1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:24:48.496 10:42:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@107 -- # local dev=target1 00:24:48.496 
10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@109 -- # return 1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@168 -- # dev= 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@169 -- # return 0 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@434 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:24:48.496 10:42:28 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:48.496 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # local block nvme 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@444 -- # modprobe nvmet 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:24:48.497 10:42:28 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:24:51.029 Waiting for block devices as requested 00:24:51.029 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:24:51.029 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:51.029 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:51.029 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:24:51.029 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:51.029 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:51.289 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:51.289 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:51.289 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:51.289 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:24:51.548 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:24:51.548 0000:80:04.5 (8086 2021): 
vfio-pci -> ioatdma 00:24:51.548 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:24:51.807 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:24:51.807 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:24:51.807 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:24:51.807 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:24:52.066 No valid GPT data, bailing 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@394 -- # pt= 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@395 -- # return 1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # 
nvme=/dev/nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@469 -- # echo 1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@471 -- # echo 1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # echo tcp 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@475 -- # echo 4420 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@476 -- # echo ipv4 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:24:52.066 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:24:52.066 00:24:52.066 
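[editor's note] The `configure_kernel_target` steps traced above drive the standard kernel nvmet configfs interface. A hedged sketch of the equivalent sequence (NQN, backing device, and address taken from the log; attribute names are the upstream nvmet configfs layout — the exact `common.sh` ordering may differ):

```shell
# Kernel NVMe-oF/TCP target via configfs; requires the nvmet modules.
modprobe nvmet nvmet-tcp
nvmet=/sys/kernel/config/nvmet
subsys=$nvmet/subsystems/nqn.2016-06.io.spdk:testnqn

# Subsystem with one namespace backed by the local NVMe block device.
mkdir -p "$subsys/namespaces/1" "$nvmet/ports/1"
echo 1 > "$subsys/attr_allow_any_host"
echo /dev/nvme0n1 > "$subsys/namespaces/1/device_path"
echo 1 > "$subsys/namespaces/1/enable"

# TCP listener on the in-namespace address, then expose the subsystem.
echo 10.0.0.1 > "$nvmet/ports/1/addr_traddr"
echo tcp      > "$nvmet/ports/1/addr_trtype"
echo 4420     > "$nvmet/ports/1/addr_trsvcid"
echo ipv4     > "$nvmet/ports/1/addr_adrfam"
ln -s "$subsys" "$nvmet/ports/1/subsystems/"

# Verify with discovery, as the log does next.
nvme discover -t tcp -a 10.0.0.1 -s 4420
```

The subsequent discovery log in the trace (two records: the discovery subsystem plus `nqn.2016-06.io.spdk:testnqn`) is the expected result of this configuration.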
Discovery Log Number of Records 2, Generation counter 2 00:24:52.066 =====Discovery Log Entry 0====== 00:24:52.066 trtype: tcp 00:24:52.066 adrfam: ipv4 00:24:52.067 subtype: current discovery subsystem 00:24:52.067 treq: not specified, sq flow control disable supported 00:24:52.067 portid: 1 00:24:52.067 trsvcid: 4420 00:24:52.067 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:24:52.067 traddr: 10.0.0.1 00:24:52.067 eflags: none 00:24:52.067 sectype: none 00:24:52.067 =====Discovery Log Entry 1====== 00:24:52.067 trtype: tcp 00:24:52.067 adrfam: ipv4 00:24:52.067 subtype: nvme subsystem 00:24:52.067 treq: not specified, sq flow control disable supported 00:24:52.067 portid: 1 00:24:52.067 trsvcid: 4420 00:24:52.067 subnqn: nqn.2016-06.io.spdk:testnqn 00:24:52.067 traddr: 10.0.0.1 00:24:52.067 eflags: none 00:24:52.067 sectype: none 00:24:52.067 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:24:52.067 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:24:52.326 ===================================================== 00:24:52.326 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:24:52.326 ===================================================== 00:24:52.326 Controller Capabilities/Features 00:24:52.326 ================================ 00:24:52.326 Vendor ID: 0000 00:24:52.326 Subsystem Vendor ID: 0000 00:24:52.326 Serial Number: 475192c640ae018b3fd6 00:24:52.326 Model Number: Linux 00:24:52.326 Firmware Version: 6.8.9-20 00:24:52.326 Recommended Arb Burst: 0 00:24:52.326 IEEE OUI Identifier: 00 00 00 00:24:52.326 Multi-path I/O 00:24:52.326 May have multiple subsystem ports: No 00:24:52.326 May have multiple controllers: No 00:24:52.326 Associated with SR-IOV VF: No 00:24:52.326 Max Data Transfer Size: Unlimited 00:24:52.326 Max Number of Namespaces: 0 
00:24:52.326 Max Number of I/O Queues: 1024 00:24:52.326 NVMe Specification Version (VS): 1.3 00:24:52.326 NVMe Specification Version (Identify): 1.3 00:24:52.326 Maximum Queue Entries: 1024 00:24:52.326 Contiguous Queues Required: No 00:24:52.326 Arbitration Mechanisms Supported 00:24:52.326 Weighted Round Robin: Not Supported 00:24:52.327 Vendor Specific: Not Supported 00:24:52.327 Reset Timeout: 7500 ms 00:24:52.327 Doorbell Stride: 4 bytes 00:24:52.327 NVM Subsystem Reset: Not Supported 00:24:52.327 Command Sets Supported 00:24:52.327 NVM Command Set: Supported 00:24:52.327 Boot Partition: Not Supported 00:24:52.327 Memory Page Size Minimum: 4096 bytes 00:24:52.327 Memory Page Size Maximum: 4096 bytes 00:24:52.327 Persistent Memory Region: Not Supported 00:24:52.327 Optional Asynchronous Events Supported 00:24:52.327 Namespace Attribute Notices: Not Supported 00:24:52.327 Firmware Activation Notices: Not Supported 00:24:52.327 ANA Change Notices: Not Supported 00:24:52.327 PLE Aggregate Log Change Notices: Not Supported 00:24:52.327 LBA Status Info Alert Notices: Not Supported 00:24:52.327 EGE Aggregate Log Change Notices: Not Supported 00:24:52.327 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.327 Zone Descriptor Change Notices: Not Supported 00:24:52.327 Discovery Log Change Notices: Supported 00:24:52.327 Controller Attributes 00:24:52.327 128-bit Host Identifier: Not Supported 00:24:52.327 Non-Operational Permissive Mode: Not Supported 00:24:52.327 NVM Sets: Not Supported 00:24:52.327 Read Recovery Levels: Not Supported 00:24:52.327 Endurance Groups: Not Supported 00:24:52.327 Predictable Latency Mode: Not Supported 00:24:52.327 Traffic Based Keep ALive: Not Supported 00:24:52.327 Namespace Granularity: Not Supported 00:24:52.327 SQ Associations: Not Supported 00:24:52.327 UUID List: Not Supported 00:24:52.327 Multi-Domain Subsystem: Not Supported 00:24:52.327 Fixed Capacity Management: Not Supported 00:24:52.327 Variable Capacity Management: 
Not Supported 00:24:52.327 Delete Endurance Group: Not Supported 00:24:52.327 Delete NVM Set: Not Supported 00:24:52.327 Extended LBA Formats Supported: Not Supported 00:24:52.327 Flexible Data Placement Supported: Not Supported 00:24:52.327 00:24:52.327 Controller Memory Buffer Support 00:24:52.327 ================================ 00:24:52.327 Supported: No 00:24:52.327 00:24:52.327 Persistent Memory Region Support 00:24:52.327 ================================ 00:24:52.327 Supported: No 00:24:52.327 00:24:52.327 Admin Command Set Attributes 00:24:52.327 ============================ 00:24:52.327 Security Send/Receive: Not Supported 00:24:52.327 Format NVM: Not Supported 00:24:52.327 Firmware Activate/Download: Not Supported 00:24:52.327 Namespace Management: Not Supported 00:24:52.327 Device Self-Test: Not Supported 00:24:52.327 Directives: Not Supported 00:24:52.327 NVMe-MI: Not Supported 00:24:52.327 Virtualization Management: Not Supported 00:24:52.327 Doorbell Buffer Config: Not Supported 00:24:52.327 Get LBA Status Capability: Not Supported 00:24:52.327 Command & Feature Lockdown Capability: Not Supported 00:24:52.327 Abort Command Limit: 1 00:24:52.327 Async Event Request Limit: 1 00:24:52.327 Number of Firmware Slots: N/A 00:24:52.327 Firmware Slot 1 Read-Only: N/A 00:24:52.327 Firmware Activation Without Reset: N/A 00:24:52.327 Multiple Update Detection Support: N/A 00:24:52.327 Firmware Update Granularity: No Information Provided 00:24:52.327 Per-Namespace SMART Log: No 00:24:52.327 Asymmetric Namespace Access Log Page: Not Supported 00:24:52.327 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:24:52.327 Command Effects Log Page: Not Supported 00:24:52.327 Get Log Page Extended Data: Supported 00:24:52.327 Telemetry Log Pages: Not Supported 00:24:52.327 Persistent Event Log Pages: Not Supported 00:24:52.327 Supported Log Pages Log Page: May Support 00:24:52.327 Commands Supported & Effects Log Page: Not Supported 00:24:52.327 Feature Identifiers & 
Effects Log Page: May Support 00:24:52.327 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.327 Data Area 4 for Telemetry Log: Not Supported 00:24:52.327 Error Log Page Entries Supported: 1 00:24:52.327 Keep Alive: Not Supported 00:24:52.327 00:24:52.327 NVM Command Set Attributes 00:24:52.327 ========================== 00:24:52.327 Submission Queue Entry Size 00:24:52.327 Max: 1 00:24:52.327 Min: 1 00:24:52.327 Completion Queue Entry Size 00:24:52.327 Max: 1 00:24:52.327 Min: 1 00:24:52.327 Number of Namespaces: 0 00:24:52.327 Compare Command: Not Supported 00:24:52.327 Write Uncorrectable Command: Not Supported 00:24:52.327 Dataset Management Command: Not Supported 00:24:52.327 Write Zeroes Command: Not Supported 00:24:52.327 Set Features Save Field: Not Supported 00:24:52.327 Reservations: Not Supported 00:24:52.327 Timestamp: Not Supported 00:24:52.327 Copy: Not Supported 00:24:52.327 Volatile Write Cache: Not Present 00:24:52.327 Atomic Write Unit (Normal): 1 00:24:52.327 Atomic Write Unit (PFail): 1 00:24:52.327 Atomic Compare & Write Unit: 1 00:24:52.327 Fused Compare & Write: Not Supported 00:24:52.327 Scatter-Gather List 00:24:52.327 SGL Command Set: Supported 00:24:52.327 SGL Keyed: Not Supported 00:24:52.327 SGL Bit Bucket Descriptor: Not Supported 00:24:52.327 SGL Metadata Pointer: Not Supported 00:24:52.327 Oversized SGL: Not Supported 00:24:52.327 SGL Metadata Address: Not Supported 00:24:52.327 SGL Offset: Supported 00:24:52.327 Transport SGL Data Block: Not Supported 00:24:52.327 Replay Protected Memory Block: Not Supported 00:24:52.327 00:24:52.327 Firmware Slot Information 00:24:52.327 ========================= 00:24:52.327 Active slot: 0 00:24:52.327 00:24:52.327 00:24:52.327 Error Log 00:24:52.327 ========= 00:24:52.327 00:24:52.327 Active Namespaces 00:24:52.327 ================= 00:24:52.327 Discovery Log Page 00:24:52.327 ================== 00:24:52.327 Generation Counter: 2 00:24:52.327 Number of Records: 2 00:24:52.327 Record 
Format: 0 00:24:52.327 00:24:52.327 Discovery Log Entry 0 00:24:52.327 ---------------------- 00:24:52.327 Transport Type: 3 (TCP) 00:24:52.327 Address Family: 1 (IPv4) 00:24:52.327 Subsystem Type: 3 (Current Discovery Subsystem) 00:24:52.327 Entry Flags: 00:24:52.327 Duplicate Returned Information: 0 00:24:52.327 Explicit Persistent Connection Support for Discovery: 0 00:24:52.327 Transport Requirements: 00:24:52.327 Secure Channel: Not Specified 00:24:52.327 Port ID: 1 (0x0001) 00:24:52.327 Controller ID: 65535 (0xffff) 00:24:52.327 Admin Max SQ Size: 32 00:24:52.327 Transport Service Identifier: 4420 00:24:52.327 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:24:52.327 Transport Address: 10.0.0.1 00:24:52.327 Discovery Log Entry 1 00:24:52.327 ---------------------- 00:24:52.327 Transport Type: 3 (TCP) 00:24:52.327 Address Family: 1 (IPv4) 00:24:52.327 Subsystem Type: 2 (NVM Subsystem) 00:24:52.327 Entry Flags: 00:24:52.327 Duplicate Returned Information: 0 00:24:52.327 Explicit Persistent Connection Support for Discovery: 0 00:24:52.327 Transport Requirements: 00:24:52.327 Secure Channel: Not Specified 00:24:52.327 Port ID: 1 (0x0001) 00:24:52.327 Controller ID: 65535 (0xffff) 00:24:52.327 Admin Max SQ Size: 32 00:24:52.327 Transport Service Identifier: 4420 00:24:52.327 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:24:52.327 Transport Address: 10.0.0.1 00:24:52.327 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:24:52.327 get_feature(0x01) failed 00:24:52.327 get_feature(0x02) failed 00:24:52.327 get_feature(0x04) failed 00:24:52.327 ===================================================== 00:24:52.327 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:24:52.327 
===================================================== 00:24:52.327 Controller Capabilities/Features 00:24:52.327 ================================ 00:24:52.327 Vendor ID: 0000 00:24:52.327 Subsystem Vendor ID: 0000 00:24:52.327 Serial Number: 68ef26055e480087a874 00:24:52.327 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:24:52.327 Firmware Version: 6.8.9-20 00:24:52.327 Recommended Arb Burst: 6 00:24:52.327 IEEE OUI Identifier: 00 00 00 00:24:52.327 Multi-path I/O 00:24:52.327 May have multiple subsystem ports: Yes 00:24:52.327 May have multiple controllers: Yes 00:24:52.327 Associated with SR-IOV VF: No 00:24:52.327 Max Data Transfer Size: Unlimited 00:24:52.327 Max Number of Namespaces: 1024 00:24:52.327 Max Number of I/O Queues: 128 00:24:52.327 NVMe Specification Version (VS): 1.3 00:24:52.327 NVMe Specification Version (Identify): 1.3 00:24:52.327 Maximum Queue Entries: 1024 00:24:52.327 Contiguous Queues Required: No 00:24:52.327 Arbitration Mechanisms Supported 00:24:52.327 Weighted Round Robin: Not Supported 00:24:52.328 Vendor Specific: Not Supported 00:24:52.328 Reset Timeout: 7500 ms 00:24:52.328 Doorbell Stride: 4 bytes 00:24:52.328 NVM Subsystem Reset: Not Supported 00:24:52.328 Command Sets Supported 00:24:52.328 NVM Command Set: Supported 00:24:52.328 Boot Partition: Not Supported 00:24:52.328 Memory Page Size Minimum: 4096 bytes 00:24:52.328 Memory Page Size Maximum: 4096 bytes 00:24:52.328 Persistent Memory Region: Not Supported 00:24:52.328 Optional Asynchronous Events Supported 00:24:52.328 Namespace Attribute Notices: Supported 00:24:52.328 Firmware Activation Notices: Not Supported 00:24:52.328 ANA Change Notices: Supported 00:24:52.328 PLE Aggregate Log Change Notices: Not Supported 00:24:52.328 LBA Status Info Alert Notices: Not Supported 00:24:52.328 EGE Aggregate Log Change Notices: Not Supported 00:24:52.328 Normal NVM Subsystem Shutdown event: Not Supported 00:24:52.328 Zone Descriptor Change Notices: Not Supported 00:24:52.328 
Discovery Log Change Notices: Not Supported 00:24:52.328 Controller Attributes 00:24:52.328 128-bit Host Identifier: Supported 00:24:52.328 Non-Operational Permissive Mode: Not Supported 00:24:52.328 NVM Sets: Not Supported 00:24:52.328 Read Recovery Levels: Not Supported 00:24:52.328 Endurance Groups: Not Supported 00:24:52.328 Predictable Latency Mode: Not Supported 00:24:52.328 Traffic Based Keep Alive: Supported 00:24:52.328 Namespace Granularity: Not Supported 00:24:52.328 SQ Associations: Not Supported 00:24:52.328 UUID List: Not Supported 00:24:52.328 Multi-Domain Subsystem: Not Supported 00:24:52.328 Fixed Capacity Management: Not Supported 00:24:52.328 Variable Capacity Management: Not Supported 00:24:52.328 Delete Endurance Group: Not Supported 00:24:52.328 Delete NVM Set: Not Supported 00:24:52.328 Extended LBA Formats Supported: Not Supported 00:24:52.328 Flexible Data Placement Supported: Not Supported 00:24:52.328 00:24:52.328 Controller Memory Buffer Support 00:24:52.328 ================================ 00:24:52.328 Supported: No 00:24:52.328 00:24:52.328 Persistent Memory Region Support 00:24:52.328 ================================ 00:24:52.328 Supported: No 00:24:52.328 00:24:52.328 Admin Command Set Attributes 00:24:52.328 ============================ 00:24:52.328 Security Send/Receive: Not Supported 00:24:52.328 Format NVM: Not Supported 00:24:52.328 Firmware Activate/Download: Not Supported 00:24:52.328 Namespace Management: Not Supported 00:24:52.328 Device Self-Test: Not Supported 00:24:52.328 Directives: Not Supported 00:24:52.328 NVMe-MI: Not Supported 00:24:52.328 Virtualization Management: Not Supported 00:24:52.328 Doorbell Buffer Config: Not Supported 00:24:52.328 Get LBA Status Capability: Not Supported 00:24:52.328 Command & Feature Lockdown Capability: Not Supported 00:24:52.328 Abort Command Limit: 4 00:24:52.328 Async Event Request Limit: 4 00:24:52.328 Number of Firmware Slots: N/A 00:24:52.328 Firmware Slot 1 Read-Only: N/A 
00:24:52.328 Firmware Activation Without Reset: N/A 00:24:52.328 Multiple Update Detection Support: N/A 00:24:52.328 Firmware Update Granularity: No Information Provided 00:24:52.328 Per-Namespace SMART Log: Yes 00:24:52.328 Asymmetric Namespace Access Log Page: Supported 00:24:52.328 ANA Transition Time : 10 sec 00:24:52.328 00:24:52.328 Asymmetric Namespace Access Capabilities 00:24:52.328 ANA Optimized State : Supported 00:24:52.328 ANA Non-Optimized State : Supported 00:24:52.328 ANA Inaccessible State : Supported 00:24:52.328 ANA Persistent Loss State : Supported 00:24:52.328 ANA Change State : Supported 00:24:52.328 ANAGRPID is not changed : No 00:24:52.328 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:24:52.328 00:24:52.328 ANA Group Identifier Maximum : 128 00:24:52.328 Number of ANA Group Identifiers : 128 00:24:52.328 Max Number of Allowed Namespaces : 1024 00:24:52.328 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:24:52.328 Command Effects Log Page: Supported 00:24:52.328 Get Log Page Extended Data: Supported 00:24:52.328 Telemetry Log Pages: Not Supported 00:24:52.328 Persistent Event Log Pages: Not Supported 00:24:52.328 Supported Log Pages Log Page: May Support 00:24:52.328 Commands Supported & Effects Log Page: Not Supported 00:24:52.328 Feature Identifiers & Effects Log Page: May Support 00:24:52.328 NVMe-MI Commands & Effects Log Page: May Support 00:24:52.328 Data Area 4 for Telemetry Log: Not Supported 00:24:52.328 Error Log Page Entries Supported: 128 00:24:52.328 Keep Alive: Supported 00:24:52.328 Keep Alive Granularity: 1000 ms 00:24:52.328 00:24:52.328 NVM Command Set Attributes 00:24:52.328 ========================== 00:24:52.328 Submission Queue Entry Size 00:24:52.328 Max: 64 00:24:52.328 Min: 64 00:24:52.328 Completion Queue Entry Size 00:24:52.328 Max: 16 00:24:52.328 Min: 16 00:24:52.328 Number of Namespaces: 1024 00:24:52.328 Compare Command: Not Supported 00:24:52.328 Write Uncorrectable Command: Not Supported 00:24:52.328 
Dataset Management Command: Supported 00:24:52.328 Write Zeroes Command: Supported 00:24:52.328 Set Features Save Field: Not Supported 00:24:52.328 Reservations: Not Supported 00:24:52.328 Timestamp: Not Supported 00:24:52.328 Copy: Not Supported 00:24:52.328 Volatile Write Cache: Present 00:24:52.328 Atomic Write Unit (Normal): 1 00:24:52.328 Atomic Write Unit (PFail): 1 00:24:52.328 Atomic Compare & Write Unit: 1 00:24:52.328 Fused Compare & Write: Not Supported 00:24:52.328 Scatter-Gather List 00:24:52.328 SGL Command Set: Supported 00:24:52.328 SGL Keyed: Not Supported 00:24:52.328 SGL Bit Bucket Descriptor: Not Supported 00:24:52.328 SGL Metadata Pointer: Not Supported 00:24:52.328 Oversized SGL: Not Supported 00:24:52.328 SGL Metadata Address: Not Supported 00:24:52.328 SGL Offset: Supported 00:24:52.328 Transport SGL Data Block: Not Supported 00:24:52.328 Replay Protected Memory Block: Not Supported 00:24:52.328 00:24:52.328 Firmware Slot Information 00:24:52.328 ========================= 00:24:52.328 Active slot: 0 00:24:52.328 00:24:52.328 Asymmetric Namespace Access 00:24:52.328 =========================== 00:24:52.328 Change Count : 0 00:24:52.328 Number of ANA Group Descriptors : 1 00:24:52.328 ANA Group Descriptor : 0 00:24:52.328 ANA Group ID : 1 00:24:52.328 Number of NSID Values : 1 00:24:52.328 Change Count : 0 00:24:52.328 ANA State : 1 00:24:52.328 Namespace Identifier : 1 00:24:52.328 00:24:52.328 Commands Supported and Effects 00:24:52.328 ============================== 00:24:52.328 Admin Commands 00:24:52.328 -------------- 00:24:52.328 Get Log Page (02h): Supported 00:24:52.328 Identify (06h): Supported 00:24:52.328 Abort (08h): Supported 00:24:52.328 Set Features (09h): Supported 00:24:52.328 Get Features (0Ah): Supported 00:24:52.328 Asynchronous Event Request (0Ch): Supported 00:24:52.328 Keep Alive (18h): Supported 00:24:52.328 I/O Commands 00:24:52.328 ------------ 00:24:52.328 Flush (00h): Supported 00:24:52.328 Write (01h): Supported 
LBA-Change 00:24:52.328 Read (02h): Supported 00:24:52.328 Write Zeroes (08h): Supported LBA-Change 00:24:52.328 Dataset Management (09h): Supported 00:24:52.328 00:24:52.328 Error Log 00:24:52.328 ========= 00:24:52.328 Entry: 0 00:24:52.328 Error Count: 0x3 00:24:52.328 Submission Queue Id: 0x0 00:24:52.328 Command Id: 0x5 00:24:52.328 Phase Bit: 0 00:24:52.328 Status Code: 0x2 00:24:52.328 Status Code Type: 0x0 00:24:52.328 Do Not Retry: 1 00:24:52.328 Error Location: 0x28 00:24:52.328 LBA: 0x0 00:24:52.328 Namespace: 0x0 00:24:52.328 Vendor Log Page: 0x0 00:24:52.328 ----------- 00:24:52.328 Entry: 1 00:24:52.328 Error Count: 0x2 00:24:52.328 Submission Queue Id: 0x0 00:24:52.328 Command Id: 0x5 00:24:52.328 Phase Bit: 0 00:24:52.328 Status Code: 0x2 00:24:52.328 Status Code Type: 0x0 00:24:52.328 Do Not Retry: 1 00:24:52.328 Error Location: 0x28 00:24:52.328 LBA: 0x0 00:24:52.328 Namespace: 0x0 00:24:52.328 Vendor Log Page: 0x0 00:24:52.328 ----------- 00:24:52.328 Entry: 2 00:24:52.328 Error Count: 0x1 00:24:52.328 Submission Queue Id: 0x0 00:24:52.328 Command Id: 0x4 00:24:52.328 Phase Bit: 0 00:24:52.328 Status Code: 0x2 00:24:52.328 Status Code Type: 0x0 00:24:52.328 Do Not Retry: 1 00:24:52.328 Error Location: 0x28 00:24:52.328 LBA: 0x0 00:24:52.329 Namespace: 0x0 00:24:52.329 Vendor Log Page: 0x0 00:24:52.329 00:24:52.329 Number of Queues 00:24:52.329 ================ 00:24:52.329 Number of I/O Submission Queues: 128 00:24:52.329 Number of I/O Completion Queues: 128 00:24:52.329 00:24:52.329 ZNS Specific Controller Data 00:24:52.329 ============================ 00:24:52.329 Zone Append Size Limit: 0 00:24:52.329 00:24:52.329 00:24:52.329 Active Namespaces 00:24:52.329 ================= 00:24:52.329 get_feature(0x05) failed 00:24:52.329 Namespace ID:1 00:24:52.329 Command Set Identifier: NVM (00h) 00:24:52.329 Deallocate: Supported 00:24:52.329 Deallocated/Unwritten Error: Not Supported 00:24:52.329 Deallocated Read Value: Unknown 00:24:52.329 Deallocate 
in Write Zeroes: Not Supported 00:24:52.329 Deallocated Guard Field: 0xFFFF 00:24:52.329 Flush: Supported 00:24:52.329 Reservation: Not Supported 00:24:52.329 Namespace Sharing Capabilities: Multiple Controllers 00:24:52.329 Size (in LBAs): 3125627568 (1490GiB) 00:24:52.329 Capacity (in LBAs): 3125627568 (1490GiB) 00:24:52.329 Utilization (in LBAs): 3125627568 (1490GiB) 00:24:52.329 UUID: 160ae273-90f0-4a59-a344-980aa2992120 00:24:52.329 Thin Provisioning: Not Supported 00:24:52.329 Per-NS Atomic Units: Yes 00:24:52.329 Atomic Boundary Size (Normal): 0 00:24:52.329 Atomic Boundary Size (PFail): 0 00:24:52.329 Atomic Boundary Offset: 0 00:24:52.329 NGUID/EUI64 Never Reused: No 00:24:52.329 ANA group ID: 1 00:24:52.329 Namespace Write Protected: No 00:24:52.329 Number of LBA Formats: 1 00:24:52.329 Current LBA Format: LBA Format #00 00:24:52.329 LBA Format #00: Data Size: 512 Metadata Size: 0 00:24:52.329 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # nvmfcleanup 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@99 -- # sync 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@102 -- # set +e 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:24:52.329 rmmod nvme_tcp 00:24:52.329 rmmod nvme_fabrics 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@106 -- # set -e 00:24:52.329 
10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@107 -- # return 0 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@336 -- # '[' -n '' ']' 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # nvmf_fini 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@264 -- # local dev 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:52.329 10:42:32 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:54.861 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:24:54.861 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:24:54.861 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@130 -- # return 0 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 
00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # _dev=0 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@41 -- # dev_map=() 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/setup.sh@284 -- # iptr 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-save 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:24:54.862 10:42:35 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@542 -- # iptables-restore 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # echo 0 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:24:54.862 10:42:35 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:24:57.402 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 
0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:24:57.402 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:24:57.403 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:24:57.403 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:24:57.403 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:24:57.403 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:24:58.877 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:24:58.877 00:24:58.877 real 0m17.353s 00:24:58.877 user 0m4.428s 00:24:58.877 sys 0m8.836s 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:24:58.877 ************************************ 00:24:58.877 END TEST nvmf_identify_kernel_target 00:24:58.877 ************************************ 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:24:58.877 10:42:39 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:59.230 ************************************ 00:24:59.230 START TEST nvmf_auth_host 00:24:59.230 ************************************ 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:24:59.230 * Looking for test storage... 
00:24:59.230 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lcov --version 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@333 -- # local ver1 ver1_l 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@334 -- # local ver2 ver2_l 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # IFS=.-: 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@336 -- # read -ra ver1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # IFS=.-: 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@337 -- # read -ra ver2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@338 -- # local 'op=<' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@340 -- # ver1_l=2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@341 -- # ver2_l=1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@344 -- # case "$op" in 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@345 -- # : 1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@364 -- # (( v = 0 )) 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # decimal 1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@365 -- # ver1[v]=1 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # decimal 2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@353 -- # local d=2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@355 -- # echo 2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@366 -- # ver2[v]=2 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@368 -- # return 0 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:24:59.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.230 --rc genhtml_branch_coverage=1 00:24:59.230 --rc genhtml_function_coverage=1 00:24:59.230 --rc genhtml_legend=1 00:24:59.230 --rc geninfo_all_blocks=1 00:24:59.230 --rc geninfo_unexecuted_blocks=1 00:24:59.230 00:24:59.230 ' 00:24:59.230 10:42:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:24:59.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.230 --rc genhtml_branch_coverage=1 00:24:59.230 --rc genhtml_function_coverage=1 00:24:59.230 --rc genhtml_legend=1 00:24:59.230 --rc geninfo_all_blocks=1 00:24:59.230 --rc geninfo_unexecuted_blocks=1 00:24:59.230 00:24:59.230 ' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:24:59.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.230 --rc genhtml_branch_coverage=1 00:24:59.230 --rc genhtml_function_coverage=1 00:24:59.230 --rc genhtml_legend=1 00:24:59.230 --rc geninfo_all_blocks=1 00:24:59.230 --rc geninfo_unexecuted_blocks=1 00:24:59.230 00:24:59.230 ' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:24:59.230 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:24:59.230 --rc genhtml_branch_coverage=1 00:24:59.230 --rc genhtml_function_coverage=1 00:24:59.230 --rc genhtml_legend=1 00:24:59.230 --rc geninfo_all_blocks=1 00:24:59.230 --rc geninfo_unexecuted_blocks=1 00:24:59.230 00:24:59.230 ' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 
00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@15 -- # shopt -s extglob 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.230 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
paths/export.sh@5 -- # export PATH 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@50 -- # : 0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:24:59.231 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@54 -- # have_pci_nics=0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # prepare_net_devs 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # local -g is_hw=no 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # remove_target_ns 
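The `integer expression expected` error recorded above comes from `'[' '' -eq 1 ']'`: an empty string is not a valid operand for `-eq`, so `[` prints a diagnostic and returns non-zero, which the script happens to tolerate. A small sketch of the pitfall and a defensive form that defaults the empty value first:

```shell
# Reproducing the [ pitfall seen in the trace, then a guarded version.
v=''
# [ "$v" -eq 1 ]          # would error: "integer expression expected"
# "${v:-0}" substitutes 0 when v is unset OR empty, so -eq is always
# given a valid integer:
if [ "${v:-0}" -eq 1 ]; then
  result=yes
else
  result=no
fi
echo "$result"   # → no
```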
00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # xtrace_disable 00:24:59.231 10:42:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # pci_devs=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@131 -- # local -a pci_devs 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # pci_net_devs=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # pci_drivers=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@133 -- # local -A pci_drivers 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # net_devs=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@135 -- # local -ga net_devs 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # e810=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@136 -- # local -ga e810 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # x722=() 
00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@137 -- # local -ga x722 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # mlx=() 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@138 -- # local -ga mlx 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@163 -- # [[ 
tcp == rdma ]] 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:05.797 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:05.797 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:05.798 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:05.798 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # [[ up == up ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:05.798 Found net devices under 0000:86:00.0: cvl_0_0 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:05.798 Found net devices under 0000:86:00.1: cvl_0_1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@262 -- # is_hw=yes 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@257 -- # create_target_ns 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@27 -- # local -gA dev_map 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@28 -- # local -g _dev 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # ips=() 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:25:05.798 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772161 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:25:05.798 10.0.0.1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@11 -- # local val=167772162 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:25:05.798 10.0.0.2 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.798 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:25:05.799 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@38 -- # ping_ips 1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:05.799 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:25:05.799 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:05.799 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:25:05.799 00:25:05.799 --- 10.0.0.1 ping statistics --- 00:25:05.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.799 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:05.799 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:25:05.799 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:05.799 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:25:05.799 00:25:05.799 --- 10.0.0.2 ping statistics --- 00:25:05.799 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:05.799 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair++ )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # return 0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:25:05.799 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:25:05.799 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=initiator1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target0 00:25:05.799 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:25:05.800 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # get_net_dev target1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@107 -- # local dev=target1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:25:05.800 10:42:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@109 -- # return 1 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@168 -- # dev= 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@169 -- # return 0 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@726 -- # xtrace_disable 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@328 -- # nvmfpid=3343620 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # waitforlisten 3343620 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@835 -- # '[' -z 3343620 ']' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:05.800 10:42:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@732 -- # xtrace_disable 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@526 -- # local -A digests 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=86a4a2b673896245a82ac9c95a684fb5 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.EBu 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 86a4a2b673896245a82ac9c95a684fb5 0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 86a4a2b673896245a82ac9c95a684fb5 0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=86a4a2b673896245a82ac9c95a684fb5 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.EBu 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.EBu 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.EBu 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c90cda784e4eb6ae4072f36df889ca34eb5f273b12ec5f6ef86d37bfb2f94755 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.DvL 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key c90cda784e4eb6ae4072f36df889ca34eb5f273b12ec5f6ef86d37bfb2f94755 3 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c90cda784e4eb6ae4072f36df889ca34eb5f273b12ec5f6ef86d37bfb2f94755 3 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c90cda784e4eb6ae4072f36df889ca34eb5f273b12ec5f6ef86d37bfb2f94755 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.DvL 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.DvL 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.DvL 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=d9151b9ea682ed593846c65966e5fefdb62e9d6ca7959aa7 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.u8P 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key d9151b9ea682ed593846c65966e5fefdb62e9d6ca7959aa7 0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 d9151b9ea682ed593846c65966e5fefdb62e9d6ca7959aa7 0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=d9151b9ea682ed593846c65966e5fefdb62e9d6ca7959aa7 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@506 -- # digest=0 00:25:05.800 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.u8P 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.u8P 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.u8P 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=def3f5016ecac0b54fe59670f86a92ea2da1d1f02da7ee89 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.yqd 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key def3f5016ecac0b54fe59670f86a92ea2da1d1f02da7ee89 2 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 def3f5016ecac0b54fe59670f86a92ea2da1d1f02da7ee89 2 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key 
digest 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=def3f5016ecac0b54fe59670f86a92ea2da1d1f02da7ee89 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.yqd 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.yqd 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.yqd 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=c72c29ece0ce1229fb3290c72b0725c1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha256.XXX 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.Edf 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 
c72c29ece0ce1229fb3290c72b0725c1 1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 c72c29ece0ce1229fb3290c72b0725c1 1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=c72c29ece0ce1229fb3290c72b0725c1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.Edf 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.Edf 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.Edf 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha256 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:25:05.801 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=bfcdba8b11754e87dacfe40b981b9177 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp 
-t spdk.key-sha256.XXX 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha256.edi 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key bfcdba8b11754e87dacfe40b981b9177 1 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 bfcdba8b11754e87dacfe40b981b9177 1 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=bfcdba8b11754e87dacfe40b981b9177 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=1 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha256.edi 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha256.edi 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.edi 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha384 00:25:06.060 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=48 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # 
xxd -p -c0 -l 24 /dev/urandom 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=0d7530ff50b52908bf953ce3aeeb9b29e196490d0b2e71c0 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha384.XXX 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha384.RpU 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 0d7530ff50b52908bf953ce3aeeb9b29e196490d0b2e71c0 2 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 0d7530ff50b52908bf953ce3aeeb9b29e196490d0b2e71c0 2 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=0d7530ff50b52908bf953ce3aeeb9b29e196490d0b2e71c0 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=2 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha384.RpU 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha384.RpU 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.RpU 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local digest len file key 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@526 -- # local -A digests 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=null 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=32 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=5628bc04ffbdb36f9199972fa5612af9 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-null.XXX 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-null.Ku1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 5628bc04ffbdb36f9199972fa5612af9 0 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 5628bc04ffbdb36f9199972fa5612af9 0 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=5628bc04ffbdb36f9199972fa5612af9 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=0 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-null.Ku1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@534 -- # echo /tmp/spdk.key-null.Ku1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.Ku1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@525 -- # local 
digest len file key 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@526 -- # local -A digests 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # digest=sha512 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@528 -- # len=64 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@529 -- # key=8ccd13752d2a7116247e70e38c2f94514485df32499c16561abc224c7782c85f 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # mktemp -t spdk.key-sha512.XXX 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@530 -- # file=/tmp/spdk.key-sha512.Rol 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@531 -- # format_dhchap_key 8ccd13752d2a7116247e70e38c2f94514485df32499c16561abc224c7782c85f 3 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@521 -- # format_key DHHC-1 8ccd13752d2a7116247e70e38c2f94514485df32499c16561abc224c7782c85f 3 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@504 -- # local prefix key digest 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # prefix=DHHC-1 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # key=8ccd13752d2a7116247e70e38c2f94514485df32499c16561abc224c7782c85f 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@506 -- # digest=3 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@507 -- # python - 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@532 -- # chmod 0600 /tmp/spdk.key-sha512.Rol 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@534 -- # echo /tmp/spdk.key-sha512.Rol 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.Rol 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 3343620 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@835 -- # '[' -z 3343620 ']' 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:06.061 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:06.061 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@868 -- # return 0 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.EBu 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.DvL ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.DvL 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.u8P 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.yqd ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.yqd 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.Edf 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.edi ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.edi 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd 
keyring_file_add_key key3 /tmp/spdk.key-sha384.RpU 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.Ku1 ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.Ku1 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.Rol 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@434 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:06.320 10:42:47 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # local block nvme 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@444 -- # modprobe nvmet 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:06.320 10:42:47 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:09.605 Waiting for block devices as requested 00:25:09.605 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:09.605 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:09.605 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:09.863 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:09.863 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:09.863 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:10.121 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:10.121 0000:80:04.3 (8086 2021): 
vfio-pci -> ioatdma 00:25:10.121 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:10.121 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:10.379 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:25:10.945 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:10.945 No valid GPT data, bailing 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@394 -- # pt= 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@395 -- # return 1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:10.946 10:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@467 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@469 -- # echo 1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@471 -- # echo 1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # echo tcp 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@475 -- # echo 4420 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@476 -- # echo ipv4 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:10.946 00:25:10.946 Discovery Log Number of Records 2, Generation counter 2 00:25:10.946 =====Discovery Log Entry 0====== 00:25:10.946 trtype: tcp 00:25:10.946 adrfam: ipv4 00:25:10.946 subtype: current discovery subsystem 00:25:10.946 treq: not specified, sq flow control disable supported 00:25:10.946 portid: 1 00:25:10.946 trsvcid: 4420 00:25:10.946 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:10.946 traddr: 10.0.0.1 00:25:10.946 eflags: none 00:25:10.946 sectype: none 00:25:10.946 
=====Discovery Log Entry 1====== 00:25:10.946 trtype: tcp 00:25:10.946 adrfam: ipv4 00:25:10.946 subtype: nvme subsystem 00:25:10.946 treq: not specified, sq flow control disable supported 00:25:10.946 portid: 1 00:25:10.946 trsvcid: 4420 00:25:10.946 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:10.946 traddr: 10.0.0.1 00:25:10.946 eflags: none 00:25:10.946 sectype: none 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:10.946 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.205 nvme0n1 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.205 10:42:51 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.205 10:42:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.464 nvme0n1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
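Every iteration traced above follows the same connect_authenticate pattern: configure the allowed digests/dhgroups with `bdev_nvme_set_options`, attach with the keyid's `--dhchap-key` (and `--dhchap-ctrlr-key` when a controller key exists), confirm `bdev_nvme_get_controllers` reports `nvme0`, then detach before the next (digest, dhgroup, keyid) combination. A stubbed sketch of that flow (the `rpc_cmd` stub only records calls; real runs target the kernel nvmet listener at 10.0.0.1:4420):

```shell
# Sketch of the connect_authenticate loop body from host/auth.sh (stubbed rpc).
calls=()
rpc_cmd() { calls+=("$*"); }

connect_authenticate() {
    local digest=$1 dhgroup=$2 keyid=$3
    # host/auth.sh adds --dhchap-ctrlr-key only when ckey$keyid was registered
    local ckey=(--dhchap-ctrlr-key "ckey$keyid")
    rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
    rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
        -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
        --dhchap-key "key$keyid" "${ckey[@]}"
    rpc_cmd bdev_nvme_detach_controller nvme0   # tear down before the next combination
}

connect_authenticate sha256 ffdhe2048 1
printf '%s\n' "${calls[@]}"
```

The outer loops in the log sweep digests (sha256/sha384/sha512), dhgroups (ffdhe2048 through ffdhe8192), and all five keyids through this one function.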
00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.464 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.723 nvme0n1 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.723 
10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:11.723 10:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key 
ckey2 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.723 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.982 nvme0n1 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:11.982 10:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.982 10:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:11.982 nvme0n1 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:11.982 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.240 10:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 nvme0n1 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.240 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:12.499 10:42:52 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.499 10:42:52 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.499 nvme0n1 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.499 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:12.758 
10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 nvme0n1 00:25:12.758 10:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:12.758 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 
-- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q 
nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.017 nvme0n1 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.017 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.276 nvme0n1 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:13.276 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.277 10:42:53 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.277 10:42:53 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.535 nvme0n1 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate 
sha256 ffdhe4096 0 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:13.535 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.536 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.794 nvme0n1 00:25:13.794 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:13.794 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:13.794 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:13.794 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:13.794 10:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:13.794 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.052 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:14.310 nvme0n1 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.310 10:42:54 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.310 10:42:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.569 nvme0n1 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 
00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd 
bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.569 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.827 nvme0n1 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.827 10:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:14.827 10:42:55 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:14.827 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.085 nvme0n1 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 0 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.085 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.342 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.342 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:15.342 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.342 10:42:55 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.600 nvme0n1 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:15.600 10:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 
00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.600 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:15.601 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:15.601 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 
00:25:15.601 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:15.601 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.166 nvme0n1 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@44 -- # keyid=2 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.166 10:42:56 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.166 10:42:56 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.424 nvme0n1 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:16.424 
10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.424 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.425 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.991 nvme0n1 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.991 10:42:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:16.991 
10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:16.991 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.249 nvme0n1 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:17.249 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:17.507 10:42:57 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:17.507 10:42:57 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 nvme0n1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd 
bdev_nvme_get_controllers 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 
--dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.072 10:42:58 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.636 nvme0n1 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:18.636 
10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:18.636 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:18.637 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.201 nvme0n1 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:19.201 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.202 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:19.460 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:19.460 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:19.460 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:19.460 10:42:59 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 nvme0n1 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 
-- # xtrace_disable 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:20.025 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.026 10:43:00 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.591 nvme0n1 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.591 10:43:01 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.591 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 nvme0n1 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:20.850 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 nvme0n1
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2:
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP:
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2:
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP:
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 nvme0n1
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.109 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==:
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg:
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==:
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]]
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg:
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.367 10:43:01 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.367 nvme0n1
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.367 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.641 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.641 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:21.641 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=:
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=:
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.642 nvme0n1
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx:
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=:
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx:
00:25:21.642 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]]
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=:
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.643 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.901 nvme0n1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==:
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==:
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==:
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==:
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:21.901 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.158 nvme0n1
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2:
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP:
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2:
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP:
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.159 10:43:02 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.416 nvme0n1
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==:
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg:
00:25:22.416 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==:
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]]
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg:
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.417 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.675 nvme0n1
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=:
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=:
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]]
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:22.675 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.676 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.934 nvme0n1
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name'
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}"
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}"
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx:
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=:
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)'
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx:
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=:
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable
00:25:22.934 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x
00:25:23.191 nvme0n1
00:25:23.191 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.191 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.191 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.191 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.192 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.450 10:43:03 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.708 nvme0n1 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.708 10:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.708 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.966 nvme0n1 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.966 10:43:04 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 3 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 
00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:23.966 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:23.967 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:23.967 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:23.967 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.224 nvme0n1 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.224 
10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.224 10:43:04 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.481 nvme0n1 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.481 
10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:24.481 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:24.482 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 nvme0n1 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:25.046 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.047 10:43:05 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.304 nvme0n1 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.304 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.563 10:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.563 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.821 nvme0n1 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.821 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.821 10:43:06 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 
00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:25.822 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.388 nvme0n1 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.388 
10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.388 10:43:06 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.646 nvme0n1 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.646 
10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:26.646 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:26.905 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.472 nvme0n1 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.472 10:43:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 
-- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:27.472 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.038 nvme0n1 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.038 10:43:08 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.038 10:43:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.603 nvme0n1 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.603 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.860 10:43:09 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 
00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:28.860 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.426 nvme0n1 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.426 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:29.427 10:43:09 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.427 
10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.427 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 nvme0n1 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.993 
10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:29.993 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:29.994 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:30.252 nvme0n1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.252 
10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.252 10:43:10 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.510 nvme0n1 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.510 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 
00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.511 10:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.511 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.769 nvme0n1 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:30.769 10:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:30.769 nvme0n1 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:30.769 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
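At `auth.sh@58` the trace repeatedly builds `ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})`. This is bash's `${var:+word}` expansion: when the controller key for a keyid is set, the option/value pair is produced; when it is empty (as for keyid 4, where the trace later shows `ckey=` and `[[ -z '' ]]`), the pair vanishes from the argument list entirely. A minimal standalone sketch (the helper name `build_ckey_args` and the placeholder key strings are illustrative, not from the script):

```shell
#!/usr/bin/env bash
# Demonstrates the ${var:+...} expansion used at auth.sh@58: an empty
# controller key makes the whole --dhchap-ctrlr-key option pair
# disappear from the resulting array.
build_ckey_args() {    # hypothetical helper, not part of auth.sh
    local keyid=$1
    local ckeys=("c0" "c1" "c2" "c3" "")   # keyid 4: no ctrlr key
    local ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
    echo "${#ckey[@]}"
}

build_ckey_args 1   # prints 2 (option word + value word)
build_ckey_args 4   # prints 0 (expansion is empty)
```

Passing `"${ckey[@]}"` to the attach RPC then adds either two arguments or none, which is why the keyid-4 `bdev_nvme_attach_controller` invocation in the trace carries `--dhchap-key key4` but no `--dhchap-ctrlr-key`.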
00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe2048 4 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.027 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.028 nvme0n1 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.028 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.286 10:43:11 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.286 nvme0n1 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.286 10:43:11 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.286 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.286 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.286 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.286 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.544 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.545 nvme0n1 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe3072 2 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.545 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.803 
10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.803 nvme0n1 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:31.803 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.061 10:43:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 nvme0n1 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.061 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe3072 4 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.062 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.319 nvme0n1 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:32.319 10:43:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.320 10:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:32.320 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.577 nvme0n1 00:25:32.577 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:32.835 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.093 nvme0n1 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe4096 2 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.093 
10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.093 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.352 nvme0n1 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.352 10:43:13 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.352 10:43:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.352 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.352 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.352 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.352 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.610 nvme0n1 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:33.610 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe4096 4 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.611 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.869 nvme0n1 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:33.869 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.127 10:43:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.127 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.128 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.128 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.128 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.128 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:34.128 10:43:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.386 nvme0n1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.386 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.953 nvme0n1 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe6144 2 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.953 
10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:34.953 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 nvme0n1 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.519 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.520 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.520 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.520 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.520 10:43:15 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 3 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.520 10:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.520 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.778 nvme0n1 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe6144 4 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:35.778 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.345 nvme0n1 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.345 10:43:16 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ODZhNGEyYjY3Mzg5NjI0NWE4MmFjOWM5NWE2ODRmYjWUEzvx: 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:YzkwY2RhNzg0ZTRlYjZhZTQwNzJmMzZkZjg4OWNhMzRlYjVmMjczYjEyZWM1ZjZlZjg2ZDM3YmZiMmY5NDc1NfMcsgQ=: 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:25:36.345 10:43:16 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.910 nvme0n1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # 
set +x 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:36.910 10:43:17 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.475 nvme0n1 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 
ffdhe8192 2 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.475 
10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:37.475 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.408 nvme0n1 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGQ3NTMwZmY1MGI1MjkwOGJmOTUzY2UzYWVlYjliMjllMTk2NDkwZDBiMmU3MWMwlrOJ+Q==: 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:NTYyOGJjMDRmZmJkYjM2ZjkxOTk5NzJmYTU2MTJhZjnfBeUg: 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.408 10:43:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.408 10:43:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.972 nvme0n1 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:OGNjZDEzNzUyZDJhNzExNjI0N2U3MGUzOGMyZjk0NTE0NDg1ZGYzMjQ5OWMxNjU2MWFiYzIyNGM3NzgyYzg1Zs+jpEs=: 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # 
connect_authenticate sha512 ffdhe8192 4 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:38.972 10:43:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.537 nvme0n1 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- common/autotest_common.sh@10 -- # set +x 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:39.537 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.538 request: 00:25:39.538 { 00:25:39.538 "name": "nvme0", 00:25:39.538 "trtype": "tcp", 00:25:39.538 "traddr": "10.0.0.1", 00:25:39.538 "adrfam": "ipv4", 00:25:39.538 "trsvcid": "4420", 00:25:39.538 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.538 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.538 "prchk_reftag": false, 00:25:39.538 "prchk_guard": false, 00:25:39.538 "hdgst": false, 00:25:39.538 "ddgst": false, 00:25:39.538 "allow_unrecognized_csi": false, 00:25:39.538 "method": "bdev_nvme_attach_controller", 00:25:39.538 "req_id": 1 00:25:39.538 } 00:25:39.538 Got JSON-RPC error response 00:25:39.538 response: 00:25:39.538 { 00:25:39.538 "code": -5, 00:25:39.538 "message": "Input/output error" 00:25:39.538 } 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.538 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.796 request: 00:25:39.796 { 00:25:39.796 "name": "nvme0", 00:25:39.796 "trtype": "tcp", 00:25:39.796 "traddr": "10.0.0.1", 00:25:39.796 "adrfam": "ipv4", 00:25:39.796 "trsvcid": "4420", 00:25:39.796 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.796 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.796 
"prchk_reftag": false, 00:25:39.796 "prchk_guard": false, 00:25:39.796 "hdgst": false, 00:25:39.796 "ddgst": false, 00:25:39.796 "dhchap_key": "key2", 00:25:39.796 "allow_unrecognized_csi": false, 00:25:39.796 "method": "bdev_nvme_attach_controller", 00:25:39.796 "req_id": 1 00:25:39.796 } 00:25:39.796 Got JSON-RPC error response 00:25:39.796 response: 00:25:39.796 { 00:25:39.796 "code": -5, 00:25:39.796 "message": "Input/output error" 00:25:39.796 } 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:39.796 10:43:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.796 request: 00:25:39.796 { 00:25:39.796 "name": "nvme0", 00:25:39.796 "trtype": "tcp", 00:25:39.796 "traddr": "10.0.0.1", 00:25:39.796 "adrfam": "ipv4", 00:25:39.796 "trsvcid": "4420", 00:25:39.796 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:39.796 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:39.796 "prchk_reftag": false, 00:25:39.796 "prchk_guard": false, 00:25:39.796 "hdgst": false, 00:25:39.796 "ddgst": false, 00:25:39.796 "dhchap_key": "key1", 00:25:39.796 "dhchap_ctrlr_key": "ckey2", 00:25:39.796 "allow_unrecognized_csi": false, 00:25:39.796 "method": "bdev_nvme_attach_controller", 00:25:39.796 "req_id": 1 00:25:39.796 } 00:25:39.796 Got JSON-RPC error response 00:25:39.796 response: 00:25:39.796 { 00:25:39.796 "code": -5, 00:25:39.796 "message": 
"Input/output error" 00:25:39.796 } 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:39.796 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 nvme0n1 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@132 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@133 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # jq -r '.[].name' 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@134 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@136 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.054 10:43:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@652 -- # local es=0 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.054 request: 00:25:40.054 { 00:25:40.054 "name": "nvme0", 00:25:40.054 "dhchap_key": "key1", 00:25:40.054 "dhchap_ctrlr_key": "ckey2", 00:25:40.054 "method": "bdev_nvme_set_keys", 00:25:40.054 "req_id": 1 00:25:40.054 } 00:25:40.054 Got JSON-RPC error response 00:25:40.054 response: 00:25:40.054 { 00:25:40.054 "code": -13, 00:25:40.054 "message": "Permission denied" 00:25:40.054 } 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:40.054 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@679 -- # (( !es == 0 )) 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:40.312 10:43:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 1 != 0 )) 00:25:41.244 10:43:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@138 -- # sleep 1s 00:25:42.176 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.176 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # jq length 00:25:42.176 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.176 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.176 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.434 
10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@137 -- # (( 0 != 0 )) 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@141 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDkxNTFiOWVhNjgyZWQ1OTM4NDZjNjU5NjZlNWZlZmRiNjJlOWQ2Y2E3OTU5YWE3QOxTlg==: 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: ]] 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:ZGVmM2Y1MDE2ZWNhYzBiNTRmZTU5NjcwZjg2YTkyZWEyZGExZDFmMDJkYTdlZTg5ReAwPg==: 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@142 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 --ctrlr-loss-timeout-sec 1 --reconnect-delay-sec 1 00:25:42.434 10:43:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.434 10:43:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.434 nvme0n1 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@146 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YzcyYzI5ZWNlMGNlMTIyOWZiMzI5MGM3MmIwNzI1YzFJU7p2: 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: ]] 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:YmZjZGJhOGIxMTc1NGU4N2RhY2ZlNDBiOTgxYjkxNzcHkOsP: 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@147 -- # NOT rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@652 -- # local es=0 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # rpc_cmd bdev_nvme_set_keys nvme0 --dhchap-key key2 --dhchap-ctrlr-key ckey1 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.434 request: 00:25:42.434 { 00:25:42.434 "name": "nvme0", 00:25:42.434 "dhchap_key": "key2", 00:25:42.434 "dhchap_ctrlr_key": "ckey1", 00:25:42.434 "method": "bdev_nvme_set_keys", 00:25:42.434 "req_id": 1 00:25:42.434 } 00:25:42.434 Got JSON-RPC error response 00:25:42.434 response: 00:25:42.434 { 00:25:42.434 "code": -13, 00:25:42.434 "message": "Permission denied" 00:25:42.434 } 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@655 -- # es=1 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@679 -- # (( !es == 0 )) 
00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.434 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:42.691 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 1 != 0 )) 00:25:42.691 10:43:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@149 -- # sleep 1s 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # jq length 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@563 -- # xtrace_disable 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@148 -- # (( 0 != 0 )) 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@152 -- # trap - SIGINT SIGTERM EXIT 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@153 -- # cleanup 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # nvmfcleanup 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@99 -- # sync 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@102 -- # set +e 00:25:43.620 
10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@103 -- # for i in {1..20} 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:25:43.620 rmmod nvme_tcp 00:25:43.620 rmmod nvme_fabrics 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@106 -- # set -e 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@107 -- # return 0 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@336 -- # '[' -n 3343620 ']' 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@337 -- # killprocess 3343620 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # '[' -z 3343620 ']' 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # kill -0 3343620 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # uname 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3343620 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3343620' 00:25:43.620 killing process with pid 3343620 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@973 -- # kill 3343620 00:25:43.620 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@978 -- # wait 3343620 00:25:43.878 10:43:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # nvmf_fini 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@264 -- # local dev 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@267 -- # remove_target_ns 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:43.878 10:43:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@268 -- # delete_main_bridge 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@130 -- # return 0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@270 -- # for dev in 
"${dev_map[@]}" 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # _dev=0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@41 -- # dev_map=() 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/setup.sh@284 -- # iptr 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-save 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@542 -- # iptables-restore 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@486 -- # [[ -e 
/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # echo 0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:25:46.413 10:43:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:48.947 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.1 (8086 
2021): ioatdma -> vfio-pci 00:25:48.947 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:50.322 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:50.322 10:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.EBu /tmp/spdk.key-null.u8P /tmp/spdk.key-sha256.Edf /tmp/spdk.key-sha384.RpU /tmp/spdk.key-sha512.Rol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:25:50.322 10:43:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:53.609 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:25:53.609 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:25:53.609 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:25:53.609 00:25:53.609 real 0m54.285s 00:25:53.609 user 0m48.062s 00:25:53.609 sys 0m12.571s 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1130 -- 
# xtrace_disable 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.609 ************************************ 00:25:53.609 END TEST nvmf_auth_host 00:25:53.609 ************************************ 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.609 ************************************ 00:25:53.609 START TEST nvmf_bdevperf 00:25:53.609 ************************************ 00:25:53.609 10:43:33 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:25:53.609 * Looking for test storage... 
00:25:53.609 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lcov --version 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@345 -- # : 1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@364 -- # (( 
v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@353 -- # local d=2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@355 -- # echo 2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@368 -- # return 0 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:25:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.609 --rc genhtml_branch_coverage=1 00:25:53.609 --rc genhtml_function_coverage=1 00:25:53.609 --rc genhtml_legend=1 00:25:53.609 --rc geninfo_all_blocks=1 00:25:53.609 --rc geninfo_unexecuted_blocks=1 00:25:53.609 00:25:53.609 ' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1706 -- 
# LCOV_OPTS=' 00:25:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.609 --rc genhtml_branch_coverage=1 00:25:53.609 --rc genhtml_function_coverage=1 00:25:53.609 --rc genhtml_legend=1 00:25:53.609 --rc geninfo_all_blocks=1 00:25:53.609 --rc geninfo_unexecuted_blocks=1 00:25:53.609 00:25:53.609 ' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:25:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.609 --rc genhtml_branch_coverage=1 00:25:53.609 --rc genhtml_function_coverage=1 00:25:53.609 --rc genhtml_legend=1 00:25:53.609 --rc geninfo_all_blocks=1 00:25:53.609 --rc geninfo_unexecuted_blocks=1 00:25:53.609 00:25:53.609 ' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:25:53.609 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:53.609 --rc genhtml_branch_coverage=1 00:25:53.609 --rc genhtml_function_coverage=1 00:25:53.609 --rc genhtml_legend=1 00:25:53.609 --rc geninfo_all_blocks=1 00:25:53.609 --rc geninfo_unexecuted_blocks=1 00:25:53.609 00:25:53.609 ' 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- 
# NVMF_TRANSPORT_OPTS= 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:25:53.609 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@15 -- # shopt -s extglob 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 
-- # export PATH 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@50 -- # : 0 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:25:53.610 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : 
integer expression expected 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@54 -- # have_pci_nics=0 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # prepare_net_devs 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # local -g is_hw=no 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # remove_target_ns 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # xtrace_disable 00:25:53.610 10:43:34 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.178 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@129 -- # local 
intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # pci_devs=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # net_devs=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # e810=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@136 -- # local -ga e810 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # x722=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@137 -- # local -ga x722 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # mlx=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@138 -- # local -ga mlx 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@148 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:00.179 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:00.179 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:00.179 Found net devices under 0000:86:00.0: cvl_0_0 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:00.179 Found net devices under 0000:86:00.1: cvl_0_1 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@262 -- # is_hw=yes 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@257 -- # create_target_ns 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:00.179 10:43:39 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@28 -- # local -g _dev 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # ips=() 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:00.179 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 
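The trace above shows setup_interfaces allocating initiator/target address pairs from the numeric pool 0x0a000001 (167772161) and converting each value to dotted-quad form via val_to_ip's printf. A minimal sketch of that conversion, with the byte-shift logic reconstructed as an assumption from the printf output visible in the log:

```shell
# Sketch of the trace's val_to_ip: turn a 32-bit pool value into an IPv4
# dotted quad. The shift/mask decomposition is an assumption inferred from
# the "printf '%u.%u.%u.%u\n' 10 0 0 1" call seen in the log.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 255)) $((val >> 16 & 255)) $((val >> 8 & 255)) $((val & 255))
}

val_to_ip 167772161   # 10.0.0.1 (0x0a000001, assigned to the initiator)
val_to_ip 167772162   # 10.0.0.2 (ip is pre-incremented for the target side)
```

Consecutive pool values map to the .1/.2 pair assigned to cvl_0_0 and cvl_0_1 in the records above; the pool then advances by 2 per interface pair.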
00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:00.180 10:43:39 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772161 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:00.180 10.0.0.1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@11 -- # local val=167772162 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:00.180 10.0.0.2 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@214 -- # 
local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < 
pairs )) 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:00.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:00.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:26:00.180 00:26:00.180 --- 10.0.0.1 ping statistics --- 00:26:00.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.180 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:00.180 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:00.181 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:00.181 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.139 ms 00:26:00.181 00:26:00.181 --- 10.0.0.2 ping statistics --- 00:26:00.181 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:00.181 rtt min/avg/max/mdev = 0.139/0.139/0.139/0.000 ms 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # return 0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- 
nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:26:00.181 10:43:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:00.181 10:43:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@107 -- # local dev=target1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@109 -- # return 1 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@168 -- # dev= 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@169 -- # return 0 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:00.181 10:43:40 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=3357329 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 3357329 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3357329 ']' 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:00.181 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:00.181 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.181 [2024-11-20 10:43:40.350355] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:00.182 [2024-11-20 10:43:40.350405] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:00.182 [2024-11-20 10:43:40.428644] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:00.182 [2024-11-20 10:43:40.471341] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:00.182 [2024-11-20 10:43:40.471377] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:00.182 [2024-11-20 10:43:40.471384] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:00.182 [2024-11-20 10:43:40.471392] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:00.182 [2024-11-20 10:43:40.471397] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
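Once nvmf_tgt is up, the trace drives it through a short RPC sequence (transport, backing bdev, subsystem, namespace, listener). A dry-run sketch of that sequence, where `rpc()` merely echoes the call; in the real run these go through SPDK's rpc_cmd helper (scripts/rpc.py) against the target inside the nvmf_ns_spdk namespace:

```shell
# Dry-run sketch of the RPC calls visible in the trace. rpc() echoes instead
# of executing, so this illustrates ordering only; the rpc.py name is how
# SPDK normally exposes these methods, assumed here for illustration.
rpc() { echo "rpc.py $*"; }

rpc nvmf_create_transport -t tcp -o -u 8192
rpc bdev_malloc_create 64 512 -b Malloc0
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The listener address 10.0.0.2:4420 matches the target-namespace IP configured earlier and the iptables ACCEPT rule inserted for dport 4420.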
00:26:00.182 [2024-11-20 10:43:40.472856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:26:00.182 [2024-11-20 10:43:40.472964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:00.182 [2024-11-20 10:43:40.472964] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 [2024-11-20 10:43:40.609254] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 Malloc0 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 
-- # [[ 0 == 0 ]] 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:00.182 [2024-11-20 10:43:40.670342] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=() 00:26:00.182 
10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:26:00.182 { 00:26:00.182 "params": { 00:26:00.182 "name": "Nvme$subsystem", 00:26:00.182 "trtype": "$TEST_TRANSPORT", 00:26:00.182 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:00.182 "adrfam": "ipv4", 00:26:00.182 "trsvcid": "$NVMF_PORT", 00:26:00.182 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:00.182 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:00.182 "hdgst": ${hdgst:-false}, 00:26:00.182 "ddgst": ${ddgst:-false} 00:26:00.182 }, 00:26:00.182 "method": "bdev_nvme_attach_controller" 00:26:00.182 } 00:26:00.182 EOF 00:26:00.182 )") 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq . 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=, 00:26:00.182 10:43:40 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:26:00.182 "params": { 00:26:00.182 "name": "Nvme1", 00:26:00.182 "trtype": "tcp", 00:26:00.182 "traddr": "10.0.0.2", 00:26:00.182 "adrfam": "ipv4", 00:26:00.182 "trsvcid": "4420", 00:26:00.182 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:00.182 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:00.182 "hdgst": false, 00:26:00.182 "ddgst": false 00:26:00.182 }, 00:26:00.182 "method": "bdev_nvme_attach_controller" 00:26:00.182 }' 00:26:00.182 [2024-11-20 10:43:40.720534] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
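The gen_nvmf_target_json heredoc above expands per-subsystem variables into the JSON that bdevperf consumes on --json /dev/fd/62. A minimal re-creation of one expanded config block, with variable values mirroring those visible in the log (TEST_TRANSPORT, NVMF_FIRST_TARGET_IP, NVMF_PORT are the trace's own names; their values here are taken from the printed result):

```shell
# Reproduce the single-subsystem config gen_nvmf_target_json emits in the
# trace. Values are copied from the log's expanded output, not invented.
TEST_TRANSPORT=tcp
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420
subsystem=1

json=$(cat <<EOF
{
  "params": {
    "name": "Nvme$subsystem",
    "trtype": "$TEST_TRANSPORT",
    "traddr": "$NVMF_FIRST_TARGET_IP",
    "adrfam": "ipv4",
    "trsvcid": "$NVMF_PORT",
    "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
    "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
    "hdgst": false,
    "ddgst": false
  },
  "method": "bdev_nvme_attach_controller"
}
EOF
)
echo "$json"
```

In the real script each such block is appended to a config array and joined with jq before being fed to bdevperf, matching the printf '%s\n' output shown in the records above.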
00:26:00.182 [2024-11-20 10:43:40.720574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357351 ]
00:26:00.182 [2024-11-20 10:43:40.793696] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:00.182 [2024-11-20 10:43:40.834676] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:00.440 Running I/O for 1 seconds...
00:26:01.374 11355.00 IOPS, 44.36 MiB/s
00:26:01.374 Latency(us)
00:26:01.374 [2024-11-20T09:43:42.105Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:01.374 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:26:01.374 Verification LBA range: start 0x0 length 0x4000
00:26:01.374 Nvme1n1 : 1.00 11426.29 44.63 0.00 0.00 11160.68 1209.30 14730.00
00:26:01.374 [2024-11-20T09:43:42.105Z] ===================================================================================================================
00:26:01.374 [2024-11-20T09:43:42.105Z] Total : 11426.29 44.63 0.00 0.00 11160.68 1209.30 14730.00
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=3357590
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # config=()
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # local subsystem config
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}"
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF
00:26:01.632 {
00:26:01.632 "params": {
00:26:01.632 "name": "Nvme$subsystem",
00:26:01.632 "trtype": "$TEST_TRANSPORT",
00:26:01.632 "traddr": "$NVMF_FIRST_TARGET_IP",
00:26:01.632 "adrfam": "ipv4",
00:26:01.632 "trsvcid": "$NVMF_PORT",
00:26:01.632 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem",
00:26:01.632 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem",
00:26:01.632 "hdgst": ${hdgst:-false},
00:26:01.632 "ddgst": ${ddgst:-false}
00:26:01.632 },
00:26:01.632 "method": "bdev_nvme_attach_controller"
00:26:01.632 }
00:26:01.632 EOF
00:26:01.632 )")
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # cat
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@396 -- # jq .
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@397 -- # IFS=,
00:26:01.632 10:43:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@398 -- # printf '%s\n' '{
00:26:01.632 "params": {
00:26:01.632 "name": "Nvme1",
00:26:01.632 "trtype": "tcp",
00:26:01.632 "traddr": "10.0.0.2",
00:26:01.632 "adrfam": "ipv4",
00:26:01.632 "trsvcid": "4420",
00:26:01.632 "subnqn": "nqn.2016-06.io.spdk:cnode1",
00:26:01.632 "hostnqn": "nqn.2016-06.io.spdk:host1",
00:26:01.632 "hdgst": false,
00:26:01.632 "ddgst": false
00:26:01.632 },
00:26:01.632 "method": "bdev_nvme_attach_controller"
00:26:01.632 }'
00:26:01.632 [2024-11-20 10:43:42.200853] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:26:01.632 [2024-11-20 10:43:42.200906] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3357590 ]
00:26:01.632 [2024-11-20 10:43:42.277831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:26:01.632 [2024-11-20 10:43:42.315630] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0
00:26:02.199 Running I/O for 15 seconds...
00:26:04.066 11212.00 IOPS, 43.80 MiB/s
[2024-11-20T09:43:45.365Z] 11412.00 IOPS, 44.58 MiB/s
[2024-11-20T09:43:45.365Z] 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 3357329
00:26:04.634 10:43:45 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3
00:26:04.634 [2024-11-20 10:43:45.169971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:99536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.634 [2024-11-20 10:43:45.170015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.634 [2024-11-20 10:43:45.170033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:99544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.634 [2024-11-20 10:43:45.170041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.634 [2024-11-20 10:43:45.170051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:99552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.634 [2024-11-20 10:43:45.170058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:26:04.634 [2024-11-20 10:43:45.170066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*:
WRITE sqid:1 cid:74 nsid:1 lba:99560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:26:04.634 [2024-11-20 10:43:45.170074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... repeated nvme_qpair command/completion NOTICE pairs elided: after the kill -9, the remaining in-flight WRITE commands (lba 99560-99824) and READ commands (lba 98808-99304) were all aborted identically with ABORTED - SQ DELETION (00/08) ...]
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:99312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:99320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171689] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:99328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:99336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:99344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:99352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:04.637 [2024-11-20 10:43:45.171748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:99360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:99368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:99376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:99384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:99392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171835] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:99400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:99408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:99416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:99424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:99432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:99440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171923] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:99448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:99456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:99464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:99472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.171991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:99480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.171997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.172005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:99488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 
[2024-11-20 10:43:45.172012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.172020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:99496 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.172028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.172036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:99504 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.172042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.172050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:99512 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.172056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.637 [2024-11-20 10:43:45.172065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:99520 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:04.637 [2024-11-20 10:43:45.172071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.638 [2024-11-20 10:43:45.172079] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1085e10 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.172087] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:26:04.638 [2024-11-20 10:43:45.172093] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:26:04.638 
[2024-11-20 10:43:45.172098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:99528 len:8 PRP1 0x0 PRP2 0x0 00:26:04.638 [2024-11-20 10:43:45.172106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:04.638 [2024-11-20 10:43:45.174914] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.174968] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.175512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.175530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.175539] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.175711] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.175884] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.175893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.175902] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.175909] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.188183] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.188630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.188649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.188658] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.188831] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.189004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.189018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.189024] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.189032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.201026] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.201435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.201483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.201507] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.202086] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.202670] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.202681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.202687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.202694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.213822] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.214176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.214195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.214209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.214391] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.214559] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.214569] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.214576] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.214582] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.226657] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.227071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.227088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.227096] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.227277] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.227445] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.227455] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.227462] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.227468] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.239499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.239911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.239928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.239935] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.240093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.240274] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.240285] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.240291] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.240298] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.252349] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.252772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.252817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.252841] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.253430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.253876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.253885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.253892] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.253898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.267610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.268104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.268129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.268140] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.268401] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.268658] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.268671] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.268680] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.268690] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.638 [2024-11-20 10:43:45.280574] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.638 [2024-11-20 10:43:45.280937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.638 [2024-11-20 10:43:45.280959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.638 [2024-11-20 10:43:45.280967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.638 [2024-11-20 10:43:45.281134] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.638 [2024-11-20 10:43:45.281310] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.638 [2024-11-20 10:43:45.281320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.638 [2024-11-20 10:43:45.281327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.638 [2024-11-20 10:43:45.281334] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.293619] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.639 [2024-11-20 10:43:45.294048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.639 [2024-11-20 10:43:45.294065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.639 [2024-11-20 10:43:45.294073] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.639 [2024-11-20 10:43:45.294250] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.639 [2024-11-20 10:43:45.294423] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.639 [2024-11-20 10:43:45.294433] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.639 [2024-11-20 10:43:45.294440] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.639 [2024-11-20 10:43:45.294447] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.306578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.639 [2024-11-20 10:43:45.307030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.639 [2024-11-20 10:43:45.307074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.639 [2024-11-20 10:43:45.307097] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.639 [2024-11-20 10:43:45.307606] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.639 [2024-11-20 10:43:45.307775] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.639 [2024-11-20 10:43:45.307786] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.639 [2024-11-20 10:43:45.307793] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.639 [2024-11-20 10:43:45.307800] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.319462] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.639 [2024-11-20 10:43:45.319864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.639 [2024-11-20 10:43:45.319883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.639 [2024-11-20 10:43:45.319890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.639 [2024-11-20 10:43:45.320061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.639 [2024-11-20 10:43:45.320235] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.639 [2024-11-20 10:43:45.320245] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.639 [2024-11-20 10:43:45.320252] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.639 [2024-11-20 10:43:45.320259] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.332325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.639 [2024-11-20 10:43:45.332612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.639 [2024-11-20 10:43:45.332630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.639 [2024-11-20 10:43:45.332637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.639 [2024-11-20 10:43:45.332803] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.639 [2024-11-20 10:43:45.332978] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.639 [2024-11-20 10:43:45.332988] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.639 [2024-11-20 10:43:45.332994] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.639 [2024-11-20 10:43:45.333001] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.345311] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.639 [2024-11-20 10:43:45.345669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.639 [2024-11-20 10:43:45.345688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.639 [2024-11-20 10:43:45.345696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.639 [2024-11-20 10:43:45.345867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.639 [2024-11-20 10:43:45.346042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.639 [2024-11-20 10:43:45.346052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.639 [2024-11-20 10:43:45.346060] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.639 [2024-11-20 10:43:45.346067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.639 [2024-11-20 10:43:45.358305] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.358680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.358702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.358711] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.358885] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.359062] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.359072] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.359084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.359092] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.371283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.371575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.371594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.371603] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.371772] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.371939] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.371949] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.371956] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.371963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.384082] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.384504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.384551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.384577] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.385092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.385493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.385514] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.385529] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.385544] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.398965] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.399406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.399429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.399440] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.399693] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.399947] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.399960] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.399970] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.399980] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.411979] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.412326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.412345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.412353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.412525] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.412697] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.412707] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.412714] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.412721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.424938] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.425324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.425342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.425349] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.425521] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.425694] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.425705] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.425712] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.425719] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.438092] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.438540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.438558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.438567] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.438751] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.438935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.899 [2024-11-20 10:43:45.438945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.899 [2024-11-20 10:43:45.438952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.899 [2024-11-20 10:43:45.438959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.899 [2024-11-20 10:43:45.451403] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.899 [2024-11-20 10:43:45.451776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.899 [2024-11-20 10:43:45.451794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.899 [2024-11-20 10:43:45.451805] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.899 [2024-11-20 10:43:45.451978] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.899 [2024-11-20 10:43:45.452150] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.452160] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.452167] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.452174] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.464487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.464854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.464872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.464880] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.465062] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.465253] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.465264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.465271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.465278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.477452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.477860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.477878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.477886] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.478057] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.478284] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.478295] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.478302] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.478310] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.490719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.491088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.491107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.491115] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.491304] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.491491] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.491501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.491509] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.491515] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.503910] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.504293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.504311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.504321] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.504508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.504680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.504691] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.504698] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.504705] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.516918] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.517213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.517231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.517238] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.517410] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.517582] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.517591] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.517598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.517604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.529877] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.530309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.530328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.530336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.530508] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.530680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.530690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.530701] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.530708] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.543086] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.543543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.543561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.543569] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.543752] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.543936] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.543946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.543953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.543960] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.556326] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.556742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.556761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.556769] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.556951] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.557135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.557145] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.557152] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.557159] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.569452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.569885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.569903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.569912] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.570094] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.900 [2024-11-20 10:43:45.570283] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.900 [2024-11-20 10:43:45.570294] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.900 [2024-11-20 10:43:45.570301] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.900 [2024-11-20 10:43:45.570309] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.900 [2024-11-20 10:43:45.582689] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.900 [2024-11-20 10:43:45.583054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.900 [2024-11-20 10:43:45.583073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.900 [2024-11-20 10:43:45.583081] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.900 [2024-11-20 10:43:45.583268] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.901 [2024-11-20 10:43:45.583452] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.901 [2024-11-20 10:43:45.583462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.901 [2024-11-20 10:43:45.583469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.901 [2024-11-20 10:43:45.583476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.901 [2024-11-20 10:43:45.595868] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.901 [2024-11-20 10:43:45.596232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.901 [2024-11-20 10:43:45.596250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.901 [2024-11-20 10:43:45.596258] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.901 [2024-11-20 10:43:45.596430] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.901 [2024-11-20 10:43:45.596604] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.901 [2024-11-20 10:43:45.596614] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.901 [2024-11-20 10:43:45.596621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.901 [2024-11-20 10:43:45.596629] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.901 [2024-11-20 10:43:45.608999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.901 [2024-11-20 10:43:45.609358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.901 [2024-11-20 10:43:45.609377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.901 [2024-11-20 10:43:45.609384] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.901 [2024-11-20 10:43:45.609566] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.901 [2024-11-20 10:43:45.609751] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.901 [2024-11-20 10:43:45.609761] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.901 [2024-11-20 10:43:45.609768] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.901 [2024-11-20 10:43:45.609775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:04.901 [2024-11-20 10:43:45.622274] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:04.901 [2024-11-20 10:43:45.622722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:04.901 [2024-11-20 10:43:45.622742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:04.901 [2024-11-20 10:43:45.622757] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:04.901 [2024-11-20 10:43:45.622942] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:04.901 [2024-11-20 10:43:45.623146] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:04.901 [2024-11-20 10:43:45.623161] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:04.901 [2024-11-20 10:43:45.623169] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:04.901 [2024-11-20 10:43:45.623177] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.635445] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.635819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.635839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.635847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.636031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.636221] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.636232] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.636240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.636246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 9578.33 IOPS, 37.42 MiB/s [2024-11-20T09:43:45.892Z] [2024-11-20 10:43:45.649966] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.650340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.650360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.650369] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.650552] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.650736] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.650747] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.650753] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.650760] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.663023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.663456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.663474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.663482] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.663655] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.663832] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.663842] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.663848] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.663856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.676194] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.676609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.676628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.676636] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.676820] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.677004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.677014] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.677023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.677031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.689611] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.689979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.689997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.690005] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.690199] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.690412] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.690423] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.690432] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.690440] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.702573] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.703022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.703041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.703049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.703227] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.703400] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.703410] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.703424] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.703431] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.715663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.716083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.716129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.716153] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.716590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.716764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.716774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.716781] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.161 [2024-11-20 10:43:45.716788] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.161 [2024-11-20 10:43:45.728503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.161 [2024-11-20 10:43:45.728923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.161 [2024-11-20 10:43:45.728970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.161 [2024-11-20 10:43:45.728994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.161 [2024-11-20 10:43:45.729587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.161 [2024-11-20 10:43:45.729786] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.161 [2024-11-20 10:43:45.729796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.161 [2024-11-20 10:43:45.729802] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.729809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.741283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.741629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.741646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.741653] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.741811] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.741970] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.741979] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.741985] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.741992] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.754114] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.754528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.754546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.754554] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.754712] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.754871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.754880] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.754887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.754893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.766874] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.767284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.767327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.767353] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.767897] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.768057] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.768066] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.768072] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.768079] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.779631] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.780058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.780104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.780128] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.780721] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.781141] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.781151] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.781158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.781164] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.792396] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.792812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.792829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.792839] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.792998] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.793157] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.793166] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.793172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.793178] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.805166] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.805622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.805666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.805689] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.806176] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.806368] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.806379] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.806386] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.806393] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.817920] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.818330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.818348] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.818355] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.818513] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.818672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.818681] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.818687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.818694] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.830787] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.831186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.831243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.831267] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.831844] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.832337] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.832347] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.832353] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.832360] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.843650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.844064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.844082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.844090] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.844254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.844415] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.844424] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.162 [2024-11-20 10:43:45.844431] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.162 [2024-11-20 10:43:45.844437] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.162 [2024-11-20 10:43:45.856378] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.162 [2024-11-20 10:43:45.856768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.162 [2024-11-20 10:43:45.856813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.162 [2024-11-20 10:43:45.856836] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.162 [2024-11-20 10:43:45.857359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.162 [2024-11-20 10:43:45.857528] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.162 [2024-11-20 10:43:45.857538] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.163 [2024-11-20 10:43:45.857544] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.163 [2024-11-20 10:43:45.857550] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.163 [2024-11-20 10:43:45.869189] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.163 [2024-11-20 10:43:45.869531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.163 [2024-11-20 10:43:45.869549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.163 [2024-11-20 10:43:45.869556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.163 [2024-11-20 10:43:45.869713] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.163 [2024-11-20 10:43:45.869871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.163 [2024-11-20 10:43:45.869881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.163 [2024-11-20 10:43:45.869891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.163 [2024-11-20 10:43:45.869898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.163 [2024-11-20 10:43:45.881936] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.163 [2024-11-20 10:43:45.882348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.163 [2024-11-20 10:43:45.882365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.163 [2024-11-20 10:43:45.882372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.163 [2024-11-20 10:43:45.882532] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.163 [2024-11-20 10:43:45.882729] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.163 [2024-11-20 10:43:45.882743] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.163 [2024-11-20 10:43:45.882750] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.163 [2024-11-20 10:43:45.882757] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.894839] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.895260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.895314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.895339] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.895922] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.896082] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.896092] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.896099] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.896106] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.907576] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.908004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.908051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.908075] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.908474] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.908643] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.908653] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.908660] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.908666] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.920381] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.920802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.920842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.920867] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.921443] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.921612] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.921623] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.921629] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.921635] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.933136] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.933530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.933548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.933556] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.933723] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.933893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.933903] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.933910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.933917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.946215] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.946584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.946602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.946611] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.946786] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.946960] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.946969] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.946976] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.946982] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.959295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.422 [2024-11-20 10:43:45.959644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.422 [2024-11-20 10:43:45.959662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.422 [2024-11-20 10:43:45.959673] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.422 [2024-11-20 10:43:45.959846] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.422 [2024-11-20 10:43:45.960022] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.422 [2024-11-20 10:43:45.960032] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.422 [2024-11-20 10:43:45.960039] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.422 [2024-11-20 10:43:45.960046] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.422 [2024-11-20 10:43:45.972325] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.422 [2024-11-20 10:43:45.972728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.422 [2024-11-20 10:43:45.972746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.422 [2024-11-20 10:43:45.972754] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.422 [2024-11-20 10:43:45.972925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.422 [2024-11-20 10:43:45.973098] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.422 [2024-11-20 10:43:45.973108] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.422 [2024-11-20 10:43:45.973114] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.422 [2024-11-20 10:43:45.973121] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.422 [2024-11-20 10:43:45.985182] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.422 [2024-11-20 10:43:45.985614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.422 [2024-11-20 10:43:45.985659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.422 [2024-11-20 10:43:45.985682] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.422 [2024-11-20 10:43:45.986172] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.422 [2024-11-20 10:43:45.986347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.422 [2024-11-20 10:43:45.986357] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.422 [2024-11-20 10:43:45.986364] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.422 [2024-11-20 10:43:45.986371] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.422 [2024-11-20 10:43:45.997928] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.422 [2024-11-20 10:43:45.998335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.422 [2024-11-20 10:43:45.998352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.422 [2024-11-20 10:43:45.998361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.422 [2024-11-20 10:43:45.998518] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.422 [2024-11-20 10:43:45.998680] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:45.998690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:45.998696] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:45.998702] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.010672] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.011089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.011135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.011158] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.011671] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.012060] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.012080] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.012095] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.012109] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.025780] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.026230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.026276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.026300] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.026792] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.027046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.027060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.027070] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.027080] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.038729] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.039144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.039191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.039232] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.039703] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.039872] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.039881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.039891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.039898] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.051518] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.051924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.051941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.051948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.052105] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.052286] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.052297] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.052304] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.052311] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.064341] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.064748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.064788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.064815] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.065406] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.065575] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.065584] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.065591] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.065598] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.077072] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.077447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.077493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.077517] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.078093] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.078687] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.078715] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.078740] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.078747] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.089862] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.090195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.090250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.090275] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.090715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.090876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.090885] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.090891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.090897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.102610] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.103015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.103032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.103039] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.103197] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.103384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.103395] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.103401] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.103407] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.115345] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.115738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.115754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.115761] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.115919] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.116078] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.423 [2024-11-20 10:43:46.116087] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.423 [2024-11-20 10:43:46.116094] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.423 [2024-11-20 10:43:46.116101] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.423 [2024-11-20 10:43:46.128057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.423 [2024-11-20 10:43:46.128484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.423 [2024-11-20 10:43:46.128529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.423 [2024-11-20 10:43:46.128562] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.423 [2024-11-20 10:43:46.129005] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.423 [2024-11-20 10:43:46.129165] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.424 [2024-11-20 10:43:46.129175] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.424 [2024-11-20 10:43:46.129181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.424 [2024-11-20 10:43:46.129187] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.424 [2024-11-20 10:43:46.140861] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.424 [2024-11-20 10:43:46.141258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.424 [2024-11-20 10:43:46.141304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.424 [2024-11-20 10:43:46.141328] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.424 [2024-11-20 10:43:46.141777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.424 [2024-11-20 10:43:46.141937] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.424 [2024-11-20 10:43:46.141946] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.424 [2024-11-20 10:43:46.141953] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.424 [2024-11-20 10:43:46.141959] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.153752] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.154177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.154196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.154210] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.154397] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.154573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.154583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.154589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.154596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.166466] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.166878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.166896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.166903] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.167061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.167225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.167240] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.167246] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.167271] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.179296] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.179634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.179652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.179660] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.179817] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.179976] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.179984] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.179990] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.179997] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.192252] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.192587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.192604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.192612] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.192770] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.192929] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.192938] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.192945] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.192951] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.205302] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.205727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.205745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.205753] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.205925] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.206104] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.206113] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.206120] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.206130] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.218129] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.218545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.218564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.218571] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.218729] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.218888] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.218897] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.218903] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.218910] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.230955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.231346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.231363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.231372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.231531] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.231688] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.231697] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.231704] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.231710] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.243768] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.244182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.244232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.244259] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.244766] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.244926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.244935] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.244941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.244948] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.256516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.256930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.256973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.256997] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.683 [2024-11-20 10:43:46.257590] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.683 [2024-11-20 10:43:46.258130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.683 [2024-11-20 10:43:46.258139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.683 [2024-11-20 10:43:46.258146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.683 [2024-11-20 10:43:46.258152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.683 [2024-11-20 10:43:46.269253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.683 [2024-11-20 10:43:46.269696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.683 [2024-11-20 10:43:46.269741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.683 [2024-11-20 10:43:46.269765] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.684 [2024-11-20 10:43:46.270151] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.684 [2024-11-20 10:43:46.270336] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.684 [2024-11-20 10:43:46.270346] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.684 [2024-11-20 10:43:46.270352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.684 [2024-11-20 10:43:46.270359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.684 [2024-11-20 10:43:46.282089] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.282500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.282516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.282523] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.282680] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.282836] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.282844] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.282850] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.282856] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.295087] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.295511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.295529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.295536] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.295707] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.295876] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.295886] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.295893] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.295899] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.308041] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.308446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.308464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.308472] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.308640] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.308807] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.308817] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.308824] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.308830] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.320844] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.321231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.321249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.321256] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.321415] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.321573] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.321583] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.321589] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.321596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.333641] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.334095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.334141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.334163] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.334755] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.335347] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.335388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.335395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.335401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.348736] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.349257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.349281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.349292] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.349542] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.349796] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.349810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.349820] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.349829] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.361593] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.362016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.362069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.362094] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.362614] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.362784] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.362794] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.362800] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.362806] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.374475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.374803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.374820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.374827] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.374985] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.375144] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.375154] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.375160] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.375170] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.387265] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.387593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.387610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.684 [2024-11-20 10:43:46.387617] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.684 [2024-11-20 10:43:46.387774] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.684 [2024-11-20 10:43:46.387934] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.684 [2024-11-20 10:43:46.387944] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.684 [2024-11-20 10:43:46.387950] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.684 [2024-11-20 10:43:46.387956] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.684 [2024-11-20 10:43:46.400001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.684 [2024-11-20 10:43:46.400412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.684 [2024-11-20 10:43:46.400429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.685 [2024-11-20 10:43:46.400437] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.685 [2024-11-20 10:43:46.400595] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.685 [2024-11-20 10:43:46.400754] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.685 [2024-11-20 10:43:46.400763] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.685 [2024-11-20 10:43:46.400769] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.685 [2024-11-20 10:43:46.400775] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.412882] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.413280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.413316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.413325] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.413500] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.413662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.413673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.413679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.413686] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.425767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.426171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.426193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.426207] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.426374] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.426548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.426557] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.426563] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.426570] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.438639] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.439017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.439035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.439043] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.439218] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.439387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.439397] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.439404] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.439411] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.451479] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.451836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.451854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.451862] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.452030] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.452197] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.452216] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.452223] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.452230] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.464430] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.464764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.464783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.464791] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.464962] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.465130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.465139] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.465146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.465153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.477154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.477533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.477578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.477602] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.478180] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.478653] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.478663] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.478669] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.478675] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.489952] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.490330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.490377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.490401] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.490916] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.491086] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.491095] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.491102] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.491108] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.502837] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.503276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.503322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.503346] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.503734] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.503893] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.503906] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.503912] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.503919] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.515709] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.516124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.516163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.516189] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.944 [2024-11-20 10:43:46.516747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.944 [2024-11-20 10:43:46.516908] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.944 [2024-11-20 10:43:46.516917] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.944 [2024-11-20 10:43:46.516923] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.944 [2024-11-20 10:43:46.516929] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.944 [2024-11-20 10:43:46.528622] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.944 [2024-11-20 10:43:46.529030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.944 [2024-11-20 10:43:46.529071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.944 [2024-11-20 10:43:46.529095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.945 [2024-11-20 10:43:46.529639] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.945 [2024-11-20 10:43:46.529800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.945 [2024-11-20 10:43:46.529810] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.945 [2024-11-20 10:43:46.529816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.945 [2024-11-20 10:43:46.529823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.945 [2024-11-20 10:43:46.541468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.945 [2024-11-20 10:43:46.541877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.945 [2024-11-20 10:43:46.541894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.945 [2024-11-20 10:43:46.541901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.945 [2024-11-20 10:43:46.542059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.945 [2024-11-20 10:43:46.542223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.945 [2024-11-20 10:43:46.542233] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.945 [2024-11-20 10:43:46.542240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.945 [2024-11-20 10:43:46.542246] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.945 [2024-11-20 10:43:46.554287] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:05.945 [2024-11-20 10:43:46.554701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:05.945 [2024-11-20 10:43:46.554718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:05.945 [2024-11-20 10:43:46.554725] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:05.945 [2024-11-20 10:43:46.554884] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:05.945 [2024-11-20 10:43:46.555042] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:05.945 [2024-11-20 10:43:46.555052] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:05.945 [2024-11-20 10:43:46.555058] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:05.945 [2024-11-20 10:43:46.555065] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:05.945 [2024-11-20 10:43:46.567038] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.945 [2024-11-20 10:43:46.567445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.945 [2024-11-20 10:43:46.567491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.945 [2024-11-20 10:43:46.567514] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.945 [2024-11-20 10:43:46.568024] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.945 [2024-11-20 10:43:46.568398] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.945 [2024-11-20 10:43:46.568416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.945 [2024-11-20 10:43:46.568430] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.945 [2024-11-20 10:43:46.568443] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.945 [2024-11-20 10:43:46.581650] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.945 [2024-11-20 10:43:46.582194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.945 [2024-11-20 10:43:46.582254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.945 [2024-11-20 10:43:46.582277] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.945 [2024-11-20 10:43:46.582784] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.945 [2024-11-20 10:43:46.583027] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.945 [2024-11-20 10:43:46.583040] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.945 [2024-11-20 10:43:46.583048] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.945 [2024-11-20 10:43:46.583058] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.945 [2024-11-20 10:43:46.594498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.945 [2024-11-20 10:43:46.594918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.945 [2024-11-20 10:43:46.594970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.945 [2024-11-20 10:43:46.594994] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.945 [2024-11-20 10:43:46.595586] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.945 [2024-11-20 10:43:46.596068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.945 [2024-11-20 10:43:46.596078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.945 [2024-11-20 10:43:46.596085] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.945 [2024-11-20 10:43:46.596091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.945 [2024-11-20 10:43:46.607240] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.945 [2024-11-20 10:43:46.607646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.945 [2024-11-20 10:43:46.607662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.945 [2024-11-20 10:43:46.607670] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.945 [2024-11-20 10:43:46.607827] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.945 [2024-11-20 10:43:46.607985] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.945 [2024-11-20 10:43:46.607995] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.945 [2024-11-20 10:43:46.608001] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.945 [2024-11-20 10:43:46.608008] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.945 [2024-11-20 10:43:46.620052] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.945 [2024-11-20 10:43:46.620472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.945 [2024-11-20 10:43:46.620517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.945 [2024-11-20 10:43:46.620542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.945 [2024-11-20 10:43:46.621006] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.945 [2024-11-20 10:43:46.621166] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.945 [2024-11-20 10:43:46.621176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.945 [2024-11-20 10:43:46.621183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.945 [2024-11-20 10:43:46.621189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.945 [2024-11-20 10:43:46.632840] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.946 [2024-11-20 10:43:46.633247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.946 [2024-11-20 10:43:46.633265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.946 [2024-11-20 10:43:46.633273] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.946 [2024-11-20 10:43:46.633435] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.946 [2024-11-20 10:43:46.633593] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.946 [2024-11-20 10:43:46.633602] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.946 [2024-11-20 10:43:46.633608] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.946 [2024-11-20 10:43:46.633614] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.946 [2024-11-20 10:43:46.645579] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.946 [2024-11-20 10:43:46.645994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.946 [2024-11-20 10:43:46.646044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.946 [2024-11-20 10:43:46.646067] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.946 [2024-11-20 10:43:46.646661] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.946 [2024-11-20 10:43:46.647188] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.946 [2024-11-20 10:43:46.647197] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.946 [2024-11-20 10:43:46.647208] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.946 [2024-11-20 10:43:46.647215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:05.946 7183.75 IOPS, 28.06 MiB/s [2024-11-20T09:43:46.677Z] [2024-11-20 10:43:46.658410] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:05.946 [2024-11-20 10:43:46.658820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:05.946 [2024-11-20 10:43:46.658837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:05.946 [2024-11-20 10:43:46.658845] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:05.946 [2024-11-20 10:43:46.659003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:05.946 [2024-11-20 10:43:46.659163] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:05.946 [2024-11-20 10:43:46.659173] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:05.946 [2024-11-20 10:43:46.659179] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:05.946 [2024-11-20 10:43:46.659185] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.205 [2024-11-20 10:43:46.671295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.205 [2024-11-20 10:43:46.671641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.205 [2024-11-20 10:43:46.671659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.205 [2024-11-20 10:43:46.671667] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.205 [2024-11-20 10:43:46.671835] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.205 [2024-11-20 10:43:46.672005] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.205 [2024-11-20 10:43:46.672018] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.205 [2024-11-20 10:43:46.672025] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.205 [2024-11-20 10:43:46.672032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.205 [2024-11-20 10:43:46.684125] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.205 [2024-11-20 10:43:46.684471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.205 [2024-11-20 10:43:46.684490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.205 [2024-11-20 10:43:46.684498] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.205 [2024-11-20 10:43:46.684656] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.205 [2024-11-20 10:43:46.684815] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.205 [2024-11-20 10:43:46.684825] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.205 [2024-11-20 10:43:46.684832] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.205 [2024-11-20 10:43:46.684838] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.205 [2024-11-20 10:43:46.696926] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.205 [2024-11-20 10:43:46.697345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.205 [2024-11-20 10:43:46.697364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.205 [2024-11-20 10:43:46.697372] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.205 [2024-11-20 10:43:46.697541] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.205 [2024-11-20 10:43:46.697710] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.205 [2024-11-20 10:43:46.697719] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.205 [2024-11-20 10:43:46.697726] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.205 [2024-11-20 10:43:46.697732] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.205 [2024-11-20 10:43:46.710021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.205 [2024-11-20 10:43:46.710393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.205 [2024-11-20 10:43:46.710412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.205 [2024-11-20 10:43:46.710420] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.205 [2024-11-20 10:43:46.710592] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.205 [2024-11-20 10:43:46.710764] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.205 [2024-11-20 10:43:46.710774] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.205 [2024-11-20 10:43:46.710782] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.205 [2024-11-20 10:43:46.710789] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.205 [2024-11-20 10:43:46.723070] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.205 [2024-11-20 10:43:46.723474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.723493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.723501] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.723673] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.723845] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.723855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.723862] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.723869] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.736153] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.736516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.736534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.736542] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.736715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.736889] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.736899] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.736906] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.736913] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.749154] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.749575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.749593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.749601] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.749768] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.749935] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.749945] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.749952] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.749961] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.762239] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.762605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.762627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.762635] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.762819] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.763014] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.763025] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.763032] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.763039] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.775037] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.775366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.775384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.775392] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.775558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.775726] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.775735] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.775742] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.775749] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.787909] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.788301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.788319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.788327] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.788502] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.788663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.788673] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.788679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.788685] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.800818] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.801198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.801257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.801281] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.801865] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.802377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.802387] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.802394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.802400] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.813585] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.813926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.813971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.813995] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.814490] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.814662] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.814672] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.814678] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.814684] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.826501] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.826913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.826931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.826938] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.827096] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.827260] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.827270] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.827276] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.827283] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.839499] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.839818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.839837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.839844] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.840003] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.840161] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.840170] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.840181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.840188] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.852357] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.852705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.852722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.852730] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.852888] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.853046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.853055] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.853061] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.853067] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.865084] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.865494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.865512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.865519] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.865685] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.865853] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.865863] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.865870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.865878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.877905] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.878322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.878341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.878348] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.878515] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.878683] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.206 [2024-11-20 10:43:46.878693] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.206 [2024-11-20 10:43:46.878700] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.206 [2024-11-20 10:43:46.878706] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.206 [2024-11-20 10:43:46.890779] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.206 [2024-11-20 10:43:46.891181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.206 [2024-11-20 10:43:46.891241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.206 [2024-11-20 10:43:46.891266] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.206 [2024-11-20 10:43:46.891764] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.206 [2024-11-20 10:43:46.891924] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.207 [2024-11-20 10:43:46.891933] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.207 [2024-11-20 10:43:46.891939] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.207 [2024-11-20 10:43:46.891946] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.207 [2024-11-20 10:43:46.903653] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.207 [2024-11-20 10:43:46.904057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.207 [2024-11-20 10:43:46.904074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.207 [2024-11-20 10:43:46.904082] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.207 [2024-11-20 10:43:46.904244] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.207 [2024-11-20 10:43:46.904404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.207 [2024-11-20 10:43:46.904413] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.207 [2024-11-20 10:43:46.904419] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.207 [2024-11-20 10:43:46.904425] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.207 [2024-11-20 10:43:46.916531] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.207 [2024-11-20 10:43:46.916922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.207 [2024-11-20 10:43:46.916940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.207 [2024-11-20 10:43:46.916948] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.207 [2024-11-20 10:43:46.917115] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.207 [2024-11-20 10:43:46.917290] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.207 [2024-11-20 10:43:46.917301] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.207 [2024-11-20 10:43:46.917307] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.207 [2024-11-20 10:43:46.917314] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.207 [2024-11-20 10:43:46.929530] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.207 [2024-11-20 10:43:46.929933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.207 [2024-11-20 10:43:46.929989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.207 [2024-11-20 10:43:46.930014] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.207 [2024-11-20 10:43:46.930613] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.207 [2024-11-20 10:43:46.930846] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.207 [2024-11-20 10:43:46.930859] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.207 [2024-11-20 10:43:46.930869] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.207 [2024-11-20 10:43:46.930879] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:46.942419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:46.942756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:46.942774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:46.942782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:46.942949] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:46.943118] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:46.943128] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:46.943135] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:46.943141] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:46.955246] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:46.955612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:46.955629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:46.955637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:46.955795] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:46.955954] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:46.955964] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:46.955971] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:46.955977] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:46.968277] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:46.968612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:46.968630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:46.968637] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:46.968809] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:46.968989] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:46.968999] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:46.969006] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:46.969013] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:46.981312] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:46.981668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:46.981686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:46.981694] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:46.981867] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:46.982038] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:46.982049] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:46.982056] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:46.982063] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:46.994365] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:46.994664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:46.994682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:46.994690] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:46.994861] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:46.995033] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:46.995044] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:46.995051] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:46.995057] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:47.007288] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:47.007547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:47.007565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:47.007572] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:47.007731] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:47.007890] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:47.007900] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:47.007910] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:47.007917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:47.020243] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:47.020516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:47.020532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:47.020540] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:47.020697] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:47.020856] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:47.020865] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:47.020872] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:47.020878] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:47.033097] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:47.033445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:47.033463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:47.033469] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.466 [2024-11-20 10:43:47.033627] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.466 [2024-11-20 10:43:47.033787] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.466 [2024-11-20 10:43:47.033796] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.466 [2024-11-20 10:43:47.033803] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.466 [2024-11-20 10:43:47.033809] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.466 [2024-11-20 10:43:47.045955] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.466 [2024-11-20 10:43:47.046317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.466 [2024-11-20 10:43:47.046335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.466 [2024-11-20 10:43:47.046343] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.046516] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.046676] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.046686] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.046693] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.046699] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.058907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.059189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.059214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.059221] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.059389] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.059557] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.059567] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.059573] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.059579] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.071804] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.072179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.072196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.072209] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.072367] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.072526] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.072536] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.072542] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.072548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.084630] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.085023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.085041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.085049] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.085212] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.085394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.085404] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.085410] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.085416] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.097565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.097998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.098042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.098074] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.098665] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.099175] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.099185] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.099191] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.099197] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.110372] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.110748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.110794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.110818] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.111305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.111475] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.111484] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.111491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.111497] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.123283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.123564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.123582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.123589] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.123747] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.123905] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.123915] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.123921] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.123927] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.136077] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.136461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.136506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.136530] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.137009] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.137172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.137180] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.137186] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.137192] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.148933] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.149344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.149373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.149381] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.149540] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.149700] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.149709] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.149715] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.149721] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.161680] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.162111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.162156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.162180] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.467 [2024-11-20 10:43:47.162777] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.467 [2024-11-20 10:43:47.163300] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.467 [2024-11-20 10:43:47.163310] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.467 [2024-11-20 10:43:47.163317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.467 [2024-11-20 10:43:47.163324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.467 [2024-11-20 10:43:47.174539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.467 [2024-11-20 10:43:47.174942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.467 [2024-11-20 10:43:47.174959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.467 [2024-11-20 10:43:47.174967] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.468 [2024-11-20 10:43:47.175125] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.468 [2024-11-20 10:43:47.175309] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.468 [2024-11-20 10:43:47.175320] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.468 [2024-11-20 10:43:47.175330] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.468 [2024-11-20 10:43:47.175337] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.468 [2024-11-20 10:43:47.187310] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.468 [2024-11-20 10:43:47.187724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.468 [2024-11-20 10:43:47.187773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.468 [2024-11-20 10:43:47.187796] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.468 [2024-11-20 10:43:47.188356] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.468 [2024-11-20 10:43:47.188542] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.468 [2024-11-20 10:43:47.188554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.468 [2024-11-20 10:43:47.188561] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.468 [2024-11-20 10:43:47.188568] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.726 [2024-11-20 10:43:47.200337] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.726 [2024-11-20 10:43:47.200765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.726 [2024-11-20 10:43:47.200784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.200792] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.200961] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.201130] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.201140] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.201147] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.201154] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.213185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.213609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.213626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.213634] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.213791] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.213949] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.213959] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.213965] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.213972] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.226021] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.226450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.226469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.226477] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.226645] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.226812] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.226822] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.226829] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.226836] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.239031] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.239390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.239408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.239416] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.239587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.239759] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.239769] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.239776] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.239782] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.251864] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.252288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.252336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.252361] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.252756] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.252916] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.252926] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.252932] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.252939] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.264702] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.265034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.265051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.265062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.265225] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.265384] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.265394] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.265399] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.265405] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.277496] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.277906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.277923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.277931] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.278089] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.278271] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.278282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.278289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.278295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.290217] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.290641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.290688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.290712] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.727 [2024-11-20 10:43:47.291305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.727 [2024-11-20 10:43:47.291492] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.727 [2024-11-20 10:43:47.291501] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.727 [2024-11-20 10:43:47.291508] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.727 [2024-11-20 10:43:47.291514] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.727 [2024-11-20 10:43:47.303104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.727 [2024-11-20 10:43:47.303441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.727 [2024-11-20 10:43:47.303459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.727 [2024-11-20 10:43:47.303467] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.303635] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.303808] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.303818] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.303825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.303831] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.315941] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.316363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.316381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.316389] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.316561] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.316721] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.316730] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.316737] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.316743] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.328755] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.329164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.329180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.329187] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.329351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.329510] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.329520] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.329526] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.329531] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.341698] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.342106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.342122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.342131] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.342294] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.342454] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.342464] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.342474] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.342481] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.354487] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.354917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.354960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.354984] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.355574] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.356080] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.356097] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.356112] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.356126] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.369308] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.369840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.369864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.369874] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.370127] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.370391] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.370406] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.370417] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.370427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.382375] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.382812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.382830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.382838] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.383011] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.383184] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.383194] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.383207] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.383214] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.395138] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.395536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.395553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.395560] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.395718] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.395875] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.395884] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.395891] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.395897] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.407958] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.728 [2024-11-20 10:43:47.408373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.728 [2024-11-20 10:43:47.408390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.728 [2024-11-20 10:43:47.408396] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.728 [2024-11-20 10:43:47.408554] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.728 [2024-11-20 10:43:47.408712] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.728 [2024-11-20 10:43:47.408721] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.728 [2024-11-20 10:43:47.408728] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.728 [2024-11-20 10:43:47.408734] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.728 [2024-11-20 10:43:47.420792] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.729 [2024-11-20 10:43:47.421208] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.729 [2024-11-20 10:43:47.421262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.729 [2024-11-20 10:43:47.421286] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.729 [2024-11-20 10:43:47.421833] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.729 [2024-11-20 10:43:47.421993] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.729 [2024-11-20 10:43:47.422002] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.729 [2024-11-20 10:43:47.422008] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.729 [2024-11-20 10:43:47.422014] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.729 [2024-11-20 10:43:47.433603] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.729 [2024-11-20 10:43:47.434013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.729 [2024-11-20 10:43:47.434054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.729 [2024-11-20 10:43:47.434087] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.729 [2024-11-20 10:43:47.434681] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.729 [2024-11-20 10:43:47.435195] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.729 [2024-11-20 10:43:47.435208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.729 [2024-11-20 10:43:47.435216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.729 [2024-11-20 10:43:47.435222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.729 [2024-11-20 10:43:47.446319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.729 [2024-11-20 10:43:47.446575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.729 [2024-11-20 10:43:47.446592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.729 [2024-11-20 10:43:47.446600] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.729 [2024-11-20 10:43:47.446757] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.729 [2024-11-20 10:43:47.446915] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.729 [2024-11-20 10:43:47.446925] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.729 [2024-11-20 10:43:47.446931] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.729 [2024-11-20 10:43:47.446937] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.988 [2024-11-20 10:43:47.459281] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.988 [2024-11-20 10:43:47.459701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.988 [2024-11-20 10:43:47.459750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.988 [2024-11-20 10:43:47.459775] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.988 [2024-11-20 10:43:47.460214] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.988 [2024-11-20 10:43:47.460406] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.988 [2024-11-20 10:43:47.460416] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.988 [2024-11-20 10:43:47.460422] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.988 [2024-11-20 10:43:47.460429] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.988 [2024-11-20 10:43:47.472180] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.988 [2024-11-20 10:43:47.472611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.988 [2024-11-20 10:43:47.472659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.988 [2024-11-20 10:43:47.472683] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.989 [2024-11-20 10:43:47.473187] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.989 [2024-11-20 10:43:47.473377] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.989 [2024-11-20 10:43:47.473388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.989 [2024-11-20 10:43:47.473395] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.989 [2024-11-20 10:43:47.473401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.989 [2024-11-20 10:43:47.484915] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.989 [2024-11-20 10:43:47.485309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.989 [2024-11-20 10:43:47.485328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.989 [2024-11-20 10:43:47.485336] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.989 [2024-11-20 10:43:47.485504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.989 [2024-11-20 10:43:47.485672] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.989 [2024-11-20 10:43:47.485682] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.989 [2024-11-20 10:43:47.485689] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.989 [2024-11-20 10:43:47.485695] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.989 [2024-11-20 10:43:47.497884] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.989 [2024-11-20 10:43:47.498262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.989 [2024-11-20 10:43:47.498282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.989 [2024-11-20 10:43:47.498290] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.989 [2024-11-20 10:43:47.498462] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.989 [2024-11-20 10:43:47.498635] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.989 [2024-11-20 10:43:47.498645] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.989 [2024-11-20 10:43:47.498653] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.989 [2024-11-20 10:43:47.498660] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.989 [2024-11-20 10:43:47.510828] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:06.989 [2024-11-20 10:43:47.511182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:06.989 [2024-11-20 10:43:47.511199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:06.989 [2024-11-20 10:43:47.511212] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:06.989 [2024-11-20 10:43:47.511380] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:06.989 [2024-11-20 10:43:47.511548] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:06.989 [2024-11-20 10:43:47.511558] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:06.989 [2024-11-20 10:43:47.511564] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:06.989 [2024-11-20 10:43:47.511575] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:06.989 [2024-11-20 10:43:47.523800] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.524145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.524190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.524229] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.989 [2024-11-20 10:43:47.524807] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.989 [2024-11-20 10:43:47.525334] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.989 [2024-11-20 10:43:47.525344] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.989 [2024-11-20 10:43:47.525352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.989 [2024-11-20 10:43:47.525359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.989 [2024-11-20 10:43:47.536683] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.537100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.537116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.537124] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.989 [2024-11-20 10:43:47.537305] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.989 [2024-11-20 10:43:47.537473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.989 [2024-11-20 10:43:47.537483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.989 [2024-11-20 10:43:47.537489] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.989 [2024-11-20 10:43:47.537496] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.989 [2024-11-20 10:43:47.549539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.549947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.549993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.550016] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.989 [2024-11-20 10:43:47.550469] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.989 [2024-11-20 10:43:47.550639] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.989 [2024-11-20 10:43:47.550649] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.989 [2024-11-20 10:43:47.550656] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.989 [2024-11-20 10:43:47.550662] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.989 [2024-11-20 10:43:47.562359] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.562711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.562727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.562734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.989 [2024-11-20 10:43:47.562892] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.989 [2024-11-20 10:43:47.563051] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.989 [2024-11-20 10:43:47.563060] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.989 [2024-11-20 10:43:47.563066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.989 [2024-11-20 10:43:47.563072] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.989 [2024-11-20 10:43:47.575185] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.575641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.575686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.575710] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.989 [2024-11-20 10:43:47.576198] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.989 [2024-11-20 10:43:47.576373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.989 [2024-11-20 10:43:47.576383] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.989 [2024-11-20 10:43:47.576390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.989 [2024-11-20 10:43:47.576396] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.989 [2024-11-20 10:43:47.587987] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.989 [2024-11-20 10:43:47.588339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.989 [2024-11-20 10:43:47.588357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.989 [2024-11-20 10:43:47.588364] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.588522] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.588681] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.588690] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.588697] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.588703] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.600785] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.601194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.601215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.601223] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.601386] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.601544] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.601554] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.601560] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.601566] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.613596] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.614009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.614026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.614034] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.614191] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.614378] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.614388] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.614394] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.614401] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.626321] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.626647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.626664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.626672] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.626829] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.626988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.626997] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.627003] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.627009] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.639156] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.639565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.639610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.639633] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.640222] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.640450] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.640462] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.640469] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.640476] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 5747.00 IOPS, 22.45 MiB/s [2024-11-20T09:43:47.721Z] [2024-11-20 10:43:47.653057] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.653457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.653476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.653483] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.653641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.653799] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.653808] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.653815] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.653821] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.665865] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.666279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.666298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.666306] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.666473] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.666641] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.666650] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.666657] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.666663] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.678696] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.679109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.679126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.679133] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.679314] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.679484] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.679493] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.679500] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.679511] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.691472] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.691872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.691889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.691896] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.692054] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.692218] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.990 [2024-11-20 10:43:47.692228] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.990 [2024-11-20 10:43:47.692234] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.990 [2024-11-20 10:43:47.692240] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:06.990 [2024-11-20 10:43:47.704198] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:06.990 [2024-11-20 10:43:47.704634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:06.990 [2024-11-20 10:43:47.704680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:06.990 [2024-11-20 10:43:47.704705] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:06.990 [2024-11-20 10:43:47.705299] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:06.990 [2024-11-20 10:43:47.705844] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:06.991 [2024-11-20 10:43:47.705854] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:06.991 [2024-11-20 10:43:47.705860] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:06.991 [2024-11-20 10:43:47.705867] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.248 [2024-11-20 10:43:47.717091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.248 [2024-11-20 10:43:47.717538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.248 [2024-11-20 10:43:47.717557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.248 [2024-11-20 10:43:47.717565] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.248 [2024-11-20 10:43:47.717724] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.248 [2024-11-20 10:43:47.717882] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.248 [2024-11-20 10:43:47.717893] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.248 [2024-11-20 10:43:47.717899] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.248 [2024-11-20 10:43:47.717905] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.248 [2024-11-20 10:43:47.729845] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.248 [2024-11-20 10:43:47.730246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.248 [2024-11-20 10:43:47.730288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.248 [2024-11-20 10:43:47.730315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.248 [2024-11-20 10:43:47.730847] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.248 [2024-11-20 10:43:47.731008] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.248 [2024-11-20 10:43:47.731017] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.248 [2024-11-20 10:43:47.731023] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.248 [2024-11-20 10:43:47.731029] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.248 [2024-11-20 10:43:47.742644] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.743058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.743076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.743085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.743275] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.743448] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.743458] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.743465] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.743472] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.755628] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.756059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.756077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.756085] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.756262] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.756436] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.756445] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.756452] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.756459] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.768511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.768833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.768878] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.768902] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.769504] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.769988] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.769998] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.770005] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.770012] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.783539] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.783996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.784042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.784065] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.784587] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.784843] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.784855] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.784866] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.784875] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.796418] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.796840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.796857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.796864] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.797031] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.797198] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.797214] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.797221] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.797227] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.809283] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.809696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.809713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.809721] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.809878] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.810036] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.810048] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.810055] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.810061] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.822001] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.249 [2024-11-20 10:43:47.822339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.249 [2024-11-20 10:43:47.822400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.249 [2024-11-20 10:43:47.822423] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.249 [2024-11-20 10:43:47.822994] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.249 [2024-11-20 10:43:47.823154] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.249 [2024-11-20 10:43:47.823164] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.249 [2024-11-20 10:43:47.823170] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.249 [2024-11-20 10:43:47.823176] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.249 [2024-11-20 10:43:47.834775] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.249 [2024-11-20 10:43:47.835129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.249 [2024-11-20 10:43:47.835173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.249 [2024-11-20 10:43:47.835196] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.249 [2024-11-20 10:43:47.835702] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.249 [2024-11-20 10:43:47.835871] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.249 [2024-11-20 10:43:47.835881] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.249 [2024-11-20 10:43:47.835887] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.249 [2024-11-20 10:43:47.835894] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.249 [2024-11-20 10:43:47.847503] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.249 [2024-11-20 10:43:47.847876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.249 [2024-11-20 10:43:47.847893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.249 [2024-11-20 10:43:47.847901] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.249 [2024-11-20 10:43:47.848059] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.249 [2024-11-20 10:43:47.848223] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.249 [2024-11-20 10:43:47.848234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.249 [2024-11-20 10:43:47.848240] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.249 [2024-11-20 10:43:47.848265] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.249 [2024-11-20 10:43:47.860295] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.249 [2024-11-20 10:43:47.860627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.249 [2024-11-20 10:43:47.860672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.860696] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.861162] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.861348] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.861358] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.861365] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.861372] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.873040] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.873450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.873467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.873474] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.873633] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.873790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.873799] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.873805] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.873812] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.885799] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.886144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.886162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.886169] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.886359] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.886540] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.886549] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.886557] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.886563] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.898556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.898974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.898990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.898998] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.899155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.899338] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.899349] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.899355] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.899361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.911301] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.911692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.911709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.911716] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.911874] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.912032] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.912041] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.912047] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.912053] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.924104] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.924512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.924529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.924537] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.924695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.924854] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.924864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.924870] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.924876] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.936829] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.937238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.937255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.937263] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.937424] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.937583] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.937592] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.937598] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.937604] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.949663] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.950086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.950131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.950154] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.950743] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.951302] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.951321] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.951336] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.250 [2024-11-20 10:43:47.951351] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.250 [2024-11-20 10:43:47.964511] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.250 [2024-11-20 10:43:47.965031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.250 [2024-11-20 10:43:47.965075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.250 [2024-11-20 10:43:47.965099] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.250 [2024-11-20 10:43:47.965596] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.250 [2024-11-20 10:43:47.965851] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.250 [2024-11-20 10:43:47.965864] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.250 [2024-11-20 10:43:47.965874] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.251 [2024-11-20 10:43:47.965884] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.509 [2024-11-20 10:43:47.977438] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.509 [2024-11-20 10:43:47.977850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.509 [2024-11-20 10:43:47.977868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.509 [2024-11-20 10:43:47.977876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.509 [2024-11-20 10:43:47.978044] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.509 [2024-11-20 10:43:47.978220] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.509 [2024-11-20 10:43:47.978234] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.509 [2024-11-20 10:43:47.978241] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:47.978249] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:47.990419] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:47.990847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:47.990867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:47.990876] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:47.991035] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:47.991193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:47.991208] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:47.991215] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:47.991222] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.003225] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.003525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.003544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.003552] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.003719] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.003887] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.003898] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.003907] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.003917] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.016315] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.016653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.016671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.016680] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.016859] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.017046] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.017058] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.017066] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.017077] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.029191] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.029607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.029652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.029677] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.030234] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.030404] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.030414] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.030421] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.030427] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.042091] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.042440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.042458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.042466] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.042632] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.042800] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.042809] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.042816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.042823] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.054916] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.055338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.055386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.055411] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.055781] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.055942] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.055951] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.055957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.055963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.067721] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.068114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.068135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.068143] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.068326] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.068494] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.068504] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.068511] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.068518] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.080452] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.080865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.080882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.080890] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.081048] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.081214] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.081224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.081230] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.081236] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.093230] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.093586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.093618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.093643] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.510 [2024-11-20 10:43:48.094235] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.510 [2024-11-20 10:43:48.094739] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.510 [2024-11-20 10:43:48.094749] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.510 [2024-11-20 10:43:48.094755] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.510 [2024-11-20 10:43:48.094762] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.510 [2024-11-20 10:43:48.106023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.510 [2024-11-20 10:43:48.106348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.510 [2024-11-20 10:43:48.106393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.510 [2024-11-20 10:43:48.106417] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.511 [2024-11-20 10:43:48.106903] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.511 [2024-11-20 10:43:48.107064] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.511 [2024-11-20 10:43:48.107073] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.511 [2024-11-20 10:43:48.107079] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.511 [2024-11-20 10:43:48.107085] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.511 [2024-11-20 10:43:48.118758] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.511 [2024-11-20 10:43:48.119106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.511 [2024-11-20 10:43:48.119123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.511 [2024-11-20 10:43:48.119130] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.511 [2024-11-20 10:43:48.119313] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.511 [2024-11-20 10:43:48.119480] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.511 [2024-11-20 10:43:48.119490] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.511 [2024-11-20 10:43:48.119496] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.511 [2024-11-20 10:43:48.119503] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.511 [2024-11-20 10:43:48.131699] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.511 [2024-11-20 10:43:48.132014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.511 [2024-11-20 10:43:48.132031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.511 [2024-11-20 10:43:48.132038] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.511 [2024-11-20 10:43:48.132196] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.511 [2024-11-20 10:43:48.132360] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.511 [2024-11-20 10:43:48.132370] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.511 [2024-11-20 10:43:48.132377] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.511 [2024-11-20 10:43:48.132383] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.511 [2024-11-20 10:43:48.144569] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.144963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.144981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.144989] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.145155] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.145327] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.145341] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.145348] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.145354] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.511 [2024-11-20 10:43:48.157351] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.157709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.157726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.157734] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.157901] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.158068] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.158078] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.158084] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.158091] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.511 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 3357329 Killed "${NVMF_APP[@]}" "$@"
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@726 -- # xtrace_disable
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:07.511 [2024-11-20 10:43:48.170423] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.170850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.170867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.170875] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.171047] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.171225] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.171236] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.171243] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.171250] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@328 -- # nvmfpid=3358529
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # waitforlisten 3358529
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 3358529 ']'
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable
00:26:07.511 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:26:07.511 [2024-11-20 10:43:48.183368] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.183720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.183738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.183746] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.183917] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.184090] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.184100] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.184107] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.184114] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.511 [2024-11-20 10:43:48.196440] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.196800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.196817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.196824] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.196995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.197167] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.197176] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.197183] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.197189] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.511 [2024-11-20 10:43:48.209454] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.511 [2024-11-20 10:43:48.209853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.511 [2024-11-20 10:43:48.209870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.511 [2024-11-20 10:43:48.209878] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.511 [2024-11-20 10:43:48.210045] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.511 [2024-11-20 10:43:48.210227] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.511 [2024-11-20 10:43:48.210254] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.511 [2024-11-20 10:43:48.210262] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.511 [2024-11-20 10:43:48.210272] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.512 [2024-11-20 10:43:48.220685] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization...
00:26:07.512 [2024-11-20 10:43:48.220723] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ]
00:26:07.512 [2024-11-20 10:43:48.222391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.512 [2024-11-20 10:43:48.222763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.512 [2024-11-20 10:43:48.222781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.512 [2024-11-20 10:43:48.222789] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.512 [2024-11-20 10:43:48.222957] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.512 [2024-11-20 10:43:48.223126] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.512 [2024-11-20 10:43:48.223137] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.512 [2024-11-20 10:43:48.223145] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.512 [2024-11-20 10:43:48.223152] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.512 [2024-11-20 10:43:48.235399] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.512 [2024-11-20 10:43:48.235751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.512 [2024-11-20 10:43:48.235774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.512 [2024-11-20 10:43:48.235782] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.770 [2024-11-20 10:43:48.235955] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.770 [2024-11-20 10:43:48.236129] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.770 [2024-11-20 10:43:48.236138] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.770 [2024-11-20 10:43:48.236146] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.770 [2024-11-20 10:43:48.236153] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.770 [2024-11-20 10:43:48.248435] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.770 [2024-11-20 10:43:48.248770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.770 [2024-11-20 10:43:48.248789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.770 [2024-11-20 10:43:48.248797] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.770 [2024-11-20 10:43:48.248966] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.770 [2024-11-20 10:43:48.249135] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.770 [2024-11-20 10:43:48.249146] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.770 [2024-11-20 10:43:48.249158] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.770 [2024-11-20 10:43:48.249166] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.770 [2024-11-20 10:43:48.261468] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.770 [2024-11-20 10:43:48.261809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.770 [2024-11-20 10:43:48.261827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.770 [2024-11-20 10:43:48.261835] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.770 [2024-11-20 10:43:48.262007] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.770 [2024-11-20 10:43:48.262179] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.770 [2024-11-20 10:43:48.262189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.770 [2024-11-20 10:43:48.262196] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.770 [2024-11-20 10:43:48.262218] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.770 [2024-11-20 10:43:48.274538] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.770 [2024-11-20 10:43:48.274829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.770 [2024-11-20 10:43:48.274848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.770 [2024-11-20 10:43:48.274856] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.770 [2024-11-20 10:43:48.275028] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.770 [2024-11-20 10:43:48.275208] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.770 [2024-11-20 10:43:48.275218] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.770 [2024-11-20 10:43:48.275225] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.770 [2024-11-20 10:43:48.275232] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.770 [2024-11-20 10:43:48.287506] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.770 [2024-11-20 10:43:48.287863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.770 [2024-11-20 10:43:48.287881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.770 [2024-11-20 10:43:48.287889] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.770 [2024-11-20 10:43:48.288061] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.770 [2024-11-20 10:43:48.288240] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.288251] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.288259] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.288266] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.299774] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3
00:26:07.771 [2024-11-20 10:43:48.300537] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.300820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.300839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.300847] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.301019] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.301193] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.301209] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.301216] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.301224] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.313565] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.313964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.313984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.313992] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.314160] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.314333] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.314345] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.314352] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.314359] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.326475] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.326829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.326847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.326855] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.327022] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.327189] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.327199] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.327211] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.327217] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.339516] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.339842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.339859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.339871] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.340039] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.340213] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.340224] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.340248] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.340256] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.341593] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:26:07.771 [2024-11-20 10:43:48.341622] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:26:07.771 [2024-11-20 10:43:48.341629] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:26:07.771 [2024-11-20 10:43:48.341636] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running.
00:26:07.771 [2024-11-20 10:43:48.341643] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:26:07.771 [2024-11-20 10:43:48.343041] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2
00:26:07.771 [2024-11-20 10:43:48.343154] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1
00:26:07.771 [2024-11-20 10:43:48.343156] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3
00:26:07.771 [2024-11-20 10:43:48.352536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.352913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.352933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.352942] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.353117] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.353298] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.353309] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.353317] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.353324] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.365602] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.366031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.366052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.366062] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.366241] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.366419] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.366430] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.366445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.366453] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.378578] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.378999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.379020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.379030] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.379211] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.379387] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.379399] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.379406] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.379414] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.391556] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.392012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.392032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.392042] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.392219] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.392394] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.392405] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.771 [2024-11-20 10:43:48.392413] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.771 [2024-11-20 10:43:48.392420] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.771 [2024-11-20 10:43:48.404536] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller
00:26:07.771 [2024-11-20 10:43:48.404926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:07.771 [2024-11-20 10:43:48.404946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420
00:26:07.771 [2024-11-20 10:43:48.404956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set
00:26:07.771 [2024-11-20 10:43:48.405128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor
00:26:07.771 [2024-11-20 10:43:48.405308] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state
00:26:07.771 [2024-11-20 10:43:48.405339] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed
00:26:07.772 [2024-11-20 10:43:48.405350] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state.
00:26:07.772 [2024-11-20 10:43:48.405361] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed.
00:26:07.772 [2024-11-20 10:43:48.417498] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.417994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.418012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.418020] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.418192] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.418373] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.418384] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.418390] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.418397] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 [2024-11-20 10:43:48.430497] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.430893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.430911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.430920] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.431092] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.431270] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.431282] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.431288] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.431295] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.772 [2024-11-20 10:43:48.443584] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.444042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.444061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.444070] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.444247] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.444421] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.444432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.444439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.444446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 [2024-11-20 10:43:48.456567] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.457045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.457064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.457072] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.457249] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.457422] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.457432] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.457439] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.457446] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 [2024-11-20 10:43:48.469555] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.469878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.469896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.469905] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.470076] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.470254] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.470264] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.470271] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.470278] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.772 [2024-11-20 10:43:48.479307] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:07.772 [2024-11-20 10:43:48.482551] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.482929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.482947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.482956] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.483128] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.483305] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.483316] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.483327] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.483335] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:07.772 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:07.772 [2024-11-20 10:43:48.495676] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:07.772 [2024-11-20 10:43:48.496077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:07.772 [2024-11-20 10:43:48.496099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:07.772 [2024-11-20 10:43:48.496109] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:07.772 [2024-11-20 10:43:48.496286] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:07.772 [2024-11-20 10:43:48.496481] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:07.772 [2024-11-20 10:43:48.496495] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:07.772 [2024-11-20 10:43:48.496503] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:07.772 [2024-11-20 10:43:48.496510] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:08.031 [2024-11-20 10:43:48.508654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:08.031 [2024-11-20 10:43:48.509059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.031 [2024-11-20 10:43:48.509079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:08.031 [2024-11-20 10:43:48.509088] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:08.031 [2024-11-20 10:43:48.509267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:08.031 [2024-11-20 10:43:48.509440] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:08.031 [2024-11-20 10:43:48.509450] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:08.031 [2024-11-20 10:43:48.509457] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:08.031 [2024-11-20 10:43:48.509463] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:08.031 Malloc0 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 [2024-11-20 10:43:48.521747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:08.031 [2024-11-20 10:43:48.522091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.031 [2024-11-20 10:43:48.522110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:08.031 [2024-11-20 10:43:48.522118] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:08.031 [2024-11-20 10:43:48.522300] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:08.031 [2024-11-20 10:43:48.522473] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:08.031 [2024-11-20 10:43:48.522483] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:08.031 [2024-11-20 10:43:48.522491] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:08.031 [2024-11-20 10:43:48.522498] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 [2024-11-20 10:43:48.534805] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:08.031 [2024-11-20 10:43:48.535147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:08.031 [2024-11-20 10:43:48.535165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xe5c500 with addr=10.0.0.2, port=4420 00:26:08.031 [2024-11-20 10:43:48.535173] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xe5c500 is same with the state(6) to be set 00:26:08.031 [2024-11-20 10:43:48.535351] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe5c500 (9): Bad file descriptor 00:26:08.031 [2024-11-20 10:43:48.535524] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Ctrlr is in error state 00:26:08.031 [2024-11-20 10:43:48.535534] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] controller reinitialization failed 00:26:08.031 [2024-11-20 10:43:48.535541] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] in failed state. 00:26:08.031 [2024-11-20 10:43:48.535548] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller failed. 
00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:08.031 [2024-11-20 10:43:48.547077] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:08.031 [2024-11-20 10:43:48.547833] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] resetting controller 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:08.031 10:43:48 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 3357590 00:26:08.031 [2024-11-20 10:43:48.569838] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] Resetting controller successful. 
00:26:08.963 4935.33 IOPS, 19.28 MiB/s [2024-11-20T09:43:51.062Z] 5849.86 IOPS, 22.85 MiB/s [2024-11-20T09:43:51.994Z] 6559.00 IOPS, 25.62 MiB/s [2024-11-20T09:43:52.927Z] 7106.67 IOPS, 27.76 MiB/s [2024-11-20T09:43:53.858Z] 7546.70 IOPS, 29.48 MiB/s [2024-11-20T09:43:54.791Z] 7895.82 IOPS, 30.84 MiB/s [2024-11-20T09:43:55.724Z] 8186.58 IOPS, 31.98 MiB/s [2024-11-20T09:43:57.097Z] 8439.00 IOPS, 32.96 MiB/s [2024-11-20T09:43:58.030Z] 8660.14 IOPS, 33.83 MiB/s [2024-11-20T09:43:58.030Z] 8842.53 IOPS, 34.54 MiB/s 00:26:17.299 Latency(us) 00:26:17.299 [2024-11-20T09:43:58.030Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.299 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:17.299 Verification LBA range: start 0x0 length 0x4000 00:26:17.299 Nvme1n1 : 15.05 8816.43 34.44 10927.61 0.00 6446.19 442.76 41693.38 00:26:17.299 [2024-11-20T09:43:58.030Z] =================================================================================================================== 00:26:17.299 [2024-11-20T09:43:58.030Z] Total : 8816.43 34.44 10927.61 0.00 6446.19 442.76 41693.38 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:17.299 10:43:57 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@99 -- # sync 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@102 -- # set +e 00:26:17.299 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:17.300 rmmod nvme_tcp 00:26:17.300 rmmod nvme_fabrics 00:26:17.300 rmmod nvme_keyring 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@106 -- # set -e 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@107 -- # return 0 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@336 -- # '[' -n 3358529 ']' 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@337 -- # killprocess 3358529 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 3358529 ']' 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # kill -0 3358529 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # uname 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:17.300 10:43:57 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3358529 00:26:17.300 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:17.300 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:17.300 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3358529' 00:26:17.300 killing 
process with pid 3358529 00:26:17.300 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@973 -- # kill 3358529 00:26:17.300 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@978 -- # wait 3358529 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # nvmf_fini 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@264 -- # local dev 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:17.558 10:43:58 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@130 -- # return 0 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' 
ip addr flush dev cvl_0_0' 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # _dev=0 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@41 -- # dev_map=() 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/setup.sh@284 -- # iptr 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-save 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@542 -- # iptables-restore 00:26:19.689 00:26:19.689 real 0m26.279s 00:26:19.689 user 1m0.858s 00:26:19.689 sys 0m6.938s 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:19.689 ************************************ 
00:26:19.689 END TEST nvmf_bdevperf 00:26:19.689 ************************************ 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:19.689 ************************************ 00:26:19.689 START TEST nvmf_target_disconnect 00:26:19.689 ************************************ 00:26:19.689 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:26:19.948 * Looking for test storage... 00:26:19.948 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lcov --version 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # IFS=.-: 00:26:19.948 10:44:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@336 -- # read -ra ver1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # IFS=.-: 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@337 -- # read -ra ver2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@338 -- # local 'op=<' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@340 -- # ver1_l=2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@341 -- # ver2_l=1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@344 -- # case "$op" in 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@345 -- # : 1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # decimal 1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@365 -- # ver1[v]=1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # decimal 2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@353 -- # local d=2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@355 -- # echo 2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@366 -- # ver2[v]=2 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@368 -- # return 0 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.948 --rc genhtml_branch_coverage=1 00:26:19.948 --rc genhtml_function_coverage=1 00:26:19.948 --rc genhtml_legend=1 00:26:19.948 --rc geninfo_all_blocks=1 00:26:19.948 --rc geninfo_unexecuted_blocks=1 
00:26:19.948 00:26:19.948 ' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.948 --rc genhtml_branch_coverage=1 00:26:19.948 --rc genhtml_function_coverage=1 00:26:19.948 --rc genhtml_legend=1 00:26:19.948 --rc geninfo_all_blocks=1 00:26:19.948 --rc geninfo_unexecuted_blocks=1 00:26:19.948 00:26:19.948 ' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.948 --rc genhtml_branch_coverage=1 00:26:19.948 --rc genhtml_function_coverage=1 00:26:19.948 --rc genhtml_legend=1 00:26:19.948 --rc geninfo_all_blocks=1 00:26:19.948 --rc geninfo_unexecuted_blocks=1 00:26:19.948 00:26:19.948 ' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:19.948 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:19.948 --rc genhtml_branch_coverage=1 00:26:19.948 --rc genhtml_function_coverage=1 00:26:19.948 --rc genhtml_legend=1 00:26:19.948 --rc geninfo_all_blocks=1 00:26:19.948 --rc geninfo_unexecuted_blocks=1 00:26:19.948 00:26:19.948 ' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # 
NVMF_THIRD_PORT=4422 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@15 -- # shopt -s extglob 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:19.948 10:44:00 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.948 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@50 
-- # : 0 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:19.949 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # remove_target_ns 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # xtrace_disable 00:26:19.949 10:44:00 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # pci_devs=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # net_devs=() 00:26:26.517 10:44:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # e810=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@136 -- # local -ga e810 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # x722=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@137 -- # local -ga x722 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # mlx=() 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@138 -- # local -ga mlx 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@157 -- 
# mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:26.517 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:26.517 10:44:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:26.517 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found 
net devices under 0000:86:00.0: cvl_0_0' 00:26:26.517 Found net devices under 0000:86:00.0: cvl_0_0 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:26.517 Found net devices under 0000:86:00.1: cvl_0_1 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@262 -- # is_hw=yes 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
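The device-discovery steps above (`pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)` followed by `pci_net_devs=("${pci_net_devs[@]##*/}")`) glob each PCI device's `net/` directory and then strip the leading path to get bare interface names. A sketch of that stripping step, using a made-up literal path rather than a live sysfs tree:

```shell
# Illustrative only: a hard-coded path standing in for the sysfs glob result.
pci_net_devs=(/sys/bus/pci/devices/0000:86:00.0/net/cvl_0_0)

# ${var##*/} removes everything up to and including the last slash,
# leaving just the interface name.
pci_net_devs=("${pci_net_devs[@]##*/}")
echo "${pci_net_devs[@]}"   # prints "cvl_0_0"
```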
nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:26.517 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@257 -- # create_target_ns 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@28 -- # local 
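The `set_up`/`set_ip` helpers traced above use a bash nameref (`local -n ns=NVMF_TARGET_NS_CMD`) so that, when a namespace command array is named, every command is prefixed with its contents (here `ip netns exec nvmf_ns_spdk`); with no name given, the command runs in the default namespace. A minimal sketch of that dispatch pattern, with `run_in` and `CMD_PREFIX` as illustrative stand-ins:

```shell
# Stand-in for NVMF_TARGET_NS_CMD; real harness uses (ip netns exec <ns>).
CMD_PREFIX=(echo "in-namespace:")

# Run "$@" either bare or prefixed by the array whose *name* is in $1.
run_in() {
  local in_ns=$1; shift
  if [[ -n $in_ns ]]; then
    local -n ns=$in_ns   # nameref: ns aliases the named array (bash >= 4.3)
    "${ns[@]}" "$@"
  else
    "$@"
  fi
}

run_in CMD_PREFIX hostname   # prints "in-namespace: hostname"
run_in "" echo plain         # prints "plain"
```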
-g _dev 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # ips=() 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:26.518 
10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772161 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:26.518 10.0.0.1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@73 -- # set_ip 
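The `val_to_ip` trace above turns the pool value 167772161 into `10.0.0.1` via `printf '%u.%u.%u.%u\n'`; the log shows the already-unpacked octets, and a plausible reconstruction of the unpacking is shifting and masking the 32-bit value (this sketch is an assumption about the helper's internals, not a copy of `nvmf/setup.sh`):

```shell
# Unpack a 32-bit integer into dotted-quad notation, high byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> prints "10.0.0.1"
val_to_ip 167772162   # 0x0A000002 -> prints "10.0.0.2"
```

This is why the setup loop can hand out initiator/target pairs by simple integer arithmetic (`ip_pool += 2`) and only convert to dotted form at assignment time.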
cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@11 -- # local val=167772162 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:26.518 10.0.0.2 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:26.518 10:44:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:26.518 10:44:06 
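The `ipts` call traced above (`nvmf/common.sh@541`) appends `-m comment --comment 'SPDK_NVMF:<original args>'` to every iptables rule it installs, tagging each rule with its own arguments so teardown can later locate and delete exactly the rules this run added. A sketch of that wrapper idea; it echoes the command instead of executing it, since inserting real rules needs root:

```shell
# Sketch of the self-tagging wrapper. The real harness would run
# iptables directly; here we only print the command line it would build.
ipts() {
  echo iptables "$@" -m comment --comment "SPDK_NVMF:$*"
}

ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Because `$*` joins the arguments with spaces, the comment reproduces the rule verbatim, making cleanup a matter of listing rules whose comment starts with `SPDK_NVMF:`.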
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:26.518 
10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:26.518 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:26.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:26.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:26:26.519 00:26:26.519 --- 10.0.0.1 ping statistics --- 00:26:26.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.519 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 
00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:26.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:26.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.116 ms 00:26:26.519 00:26:26.519 --- 10.0.0.2 ping statistics --- 00:26:26.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:26.519 rtt min/avg/max/mdev = 0.116/0.116/0.116/0.000 ms 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # return 0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:26.519 
10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@334 -- # 
NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:26.519 10:44:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:26.519 
10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@107 -- # local dev=target1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@109 -- # return 1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@168 -- # dev= 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@169 -- # return 0 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test 
nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.519 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:26.519 ************************************ 00:26:26.519 START TEST nvmf_target_disconnect_tc1 00:26:26.519 ************************************ 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc1 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@652 -- # local es=0 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:26.520 [2024-11-20 10:44:06.727961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:26.520 [2024-11-20 10:44:06.728007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1db7ab0 with addr=10.0.0.2, port=4420 00:26:26.520 [2024-11-20 10:44:06.728024] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:26:26.520 [2024-11-20 10:44:06.728033] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:26:26.520 [2024-11-20 10:44:06.728040] nvme.c: 939:spdk_nvme_probe_ext: *ERROR*: Create probe context failed 00:26:26.520 spdk_nvme_probe() failed for transport 
address '10.0.0.2' 00:26:26.520 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:26:26.520 Initializing NVMe Controllers 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@655 -- # es=1 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:26:26.520 00:26:26.520 real 0m0.117s 00:26:26.520 user 0m0.050s 00:26:26.520 sys 0m0.066s 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 ************************************ 00:26:26.520 END TEST nvmf_target_disconnect_tc1 00:26:26.520 ************************************ 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 ************************************ 00:26:26.520 START TEST nvmf_target_disconnect_tc2 00:26:26.520 ************************************ 00:26:26.520 10:44:06 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1129 -- # nvmf_target_disconnect_tc2 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3363716 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3363716 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3363716 ']' 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start 
up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:26.520 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:26.520 10:44:06 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 [2024-11-20 10:44:06.867895] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:26.520 [2024-11-20 10:44:06.867935] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:26.520 [2024-11-20 10:44:06.948801] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:26.520 [2024-11-20 10:44:06.989951] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:26.520 [2024-11-20 10:44:06.989991] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:26.520 [2024-11-20 10:44:06.989998] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:26.520 [2024-11-20 10:44:06.990004] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:26.520 [2024-11-20 10:44:06.990009] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:26.520 [2024-11-20 10:44:06.991510] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:26.520 [2024-11-20 10:44:06.991623] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:26.520 [2024-11-20 10:44:06.991730] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:26.520 [2024-11-20 10:44:06.991731] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 Malloc0 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.520 10:44:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 [2024-11-20 10:44:07.165305] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.520 10:44:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.520 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.521 [2024-11-20 10:44:07.197600] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=3363905 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:26:26.521 10:44:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:26:29.079 10:44:09 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 3363716 00:26:29.079 10:44:09 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 
Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 [2024-11-20 10:44:09.226154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 
00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Read completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.079 Write completed with error (sct=0, sc=8) 00:26:29.079 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Write completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 00:26:29.080 Read completed with error (sct=0, sc=8) 00:26:29.080 starting I/O failed 
00:26:29.080 [2024-11-20 10:44:09.226358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 3
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 [2024-11-20 10:44:09.226551] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 2
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Write completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.080 Read completed with error (sct=0, sc=8)
00:26:29.080 starting I/O failed
00:26:29.081 Read completed with error (sct=0, sc=8)
00:26:29.081 starting I/O failed
00:26:29.081 Write completed with error (sct=0, sc=8)
00:26:29.081 starting I/O failed
00:26:29.081 [2024-11-20 10:44:09.226738] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:29.081 [2024-11-20 10:44:09.226986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.227206] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.227389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.227619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.227792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.227962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.227987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228110] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.228912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.228946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.229089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.229293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.229538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.229760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.229886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.229984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.230863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.230895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.231909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.231932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.232022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.232044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.081 [2024-11-20 10:44:09.232130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.081 [2024-11-20 10:44:09.232152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.081 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.232242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.232268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.232368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.232392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.232487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.232512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.232612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.232635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.232806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.232831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233345] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.233910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.233936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.234047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.234071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.234251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.234277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.234386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.234410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.234570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.234594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.234815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.234839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.235031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.235311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.235438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.235645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.235811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.235973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.236034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.236273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.236346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.238885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.082 [2024-11-20 10:44:09.238918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.082 qpair failed and we were unable to recover it.
00:26:29.082 [2024-11-20 10:44:09.239101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.239142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.239366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.239400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.239529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.239562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.239807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.239840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.239967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.240000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.240266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.240301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.240491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.240524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.240656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.240689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.240930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.240962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.241135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.241168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.241441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.241476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.241656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.241689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.241894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.241926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.242052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.242084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.242289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.242324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.242509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.242541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.242724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.242757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.242892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.242925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.243034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.243068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.243184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.243239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.243480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.243513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.243714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.243748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.243868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.243901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.244091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.244123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.244296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.244330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.244514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.244546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.244654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.244686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.244893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.244927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.245191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.083 [2024-11-20 10:44:09.245234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.083 qpair failed and we were unable to recover it.
00:26:29.083 [2024-11-20 10:44:09.245416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.083 [2024-11-20 10:44:09.245449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.083 qpair failed and we were unable to recover it. 00:26:29.083 [2024-11-20 10:44:09.245687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.083 [2024-11-20 10:44:09.245720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.083 qpair failed and we were unable to recover it. 00:26:29.083 [2024-11-20 10:44:09.245927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.083 [2024-11-20 10:44:09.245959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.083 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.246081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.246114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.246351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.246386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.246569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.246600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.246818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.246852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.246982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.247014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.247132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.247165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.247467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.247502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.247742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.247775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.247910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.247948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.248130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.248163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.248371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.248406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.248580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.248613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.248739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.248772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.248904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.248937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.249062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.249095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.249277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.249312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.249496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.249529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.249643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.249676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.249886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.249919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.250058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.250091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.250298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.250332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.250449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.250482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.250676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.250709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.250892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.250925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.251117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.251151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.251279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.251312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.251532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.251566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 
00:26:29.084 [2024-11-20 10:44:09.251753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.251786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.084 qpair failed and we were unable to recover it. 00:26:29.084 [2024-11-20 10:44:09.251978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.084 [2024-11-20 10:44:09.252010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.252251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.252286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.252523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.252557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.252751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.252784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 
00:26:29.085 [2024-11-20 10:44:09.252910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.252943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.253217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.253251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.253440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.253473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.253710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.253783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.253993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.254030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 
00:26:29.085 [2024-11-20 10:44:09.254231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.254268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.254408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.254441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.254621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.254654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.254835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.254868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.255110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.255142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 
00:26:29.085 [2024-11-20 10:44:09.255277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.255310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.255492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.255525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.255708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.255741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.255985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.256144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 
00:26:29.085 [2024-11-20 10:44:09.256362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.256567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.256736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.256957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.256991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 00:26:29.085 [2024-11-20 10:44:09.257250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.085 [2024-11-20 10:44:09.257285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.085 qpair failed and we were unable to recover it. 
00:26:29.086 [2024-11-20 10:44:09.257545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.257577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.257769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.257802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.257978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.258125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.258305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 
00:26:29.086 [2024-11-20 10:44:09.258519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.258728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.258895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.258928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.259053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.259086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.259214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.259249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 
00:26:29.086 [2024-11-20 10:44:09.259421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.259460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.259668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.259700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.259964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.259998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.260128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.260161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.260302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.260335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 
00:26:29.086 [2024-11-20 10:44:09.260454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.260486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.260754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.260786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.260900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.260932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.261123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.261270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 
00:26:29.086 [2024-11-20 10:44:09.261419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.261627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.261783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.261918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.261951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 00:26:29.086 [2024-11-20 10:44:09.262131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.086 [2024-11-20 10:44:09.262164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.086 qpair failed and we were unable to recover it. 
00:26:29.087 [2024-11-20 10:44:09.267428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.087 [2024-11-20 10:44:09.267461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.087 qpair failed and we were unable to recover it.
00:26:29.087 [2024-11-20 10:44:09.267775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.087 [2024-11-20 10:44:09.267847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.087 qpair failed and we were unable to recover it.
00:26:29.090 [2024-11-20 10:44:09.286583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.286616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.286738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.286771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.286958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.286991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.287162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.287194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.287380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.287413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 
00:26:29.090 [2024-11-20 10:44:09.287680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.287713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.287889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.287921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.288031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.288064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.288251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.288285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.288486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.288519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 
00:26:29.090 [2024-11-20 10:44:09.288707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.288741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.288917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.288950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.289160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.289193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.289441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.289475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.289682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.289714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 
00:26:29.090 [2024-11-20 10:44:09.289839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.289872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.290057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.290091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.290290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.290323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.290504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.090 [2024-11-20 10:44:09.290536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.090 qpair failed and we were unable to recover it. 00:26:29.090 [2024-11-20 10:44:09.290731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.290764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.290948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.290980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.291221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.291254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.291360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.291394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.291532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.291565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.291686] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.291720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.291906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.291939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.292138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.292171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.292322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.292357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.292542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.292575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.292766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.292799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.293009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.293042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.293311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.293345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.293535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.293568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.293806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.293838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.293961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.293994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.294117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.294150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.294283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.294322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.294577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.294609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.294727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.294759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.294951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.294984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.295187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.295230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.295350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.295382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.295512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.295546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.295784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.295817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.295986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.296019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.296220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.296255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.296532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.296565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.296747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.296780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.296966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.296999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 00:26:29.091 [2024-11-20 10:44:09.297189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.091 [2024-11-20 10:44:09.297246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.091 qpair failed and we were unable to recover it. 
00:26:29.091 [2024-11-20 10:44:09.297423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.297457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.297637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.297670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.297909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.297942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.298120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.298152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.298290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.298324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 
00:26:29.092 [2024-11-20 10:44:09.298442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.298474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.298663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.298696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.298870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.298902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.299030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.299237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 
00:26:29.092 [2024-11-20 10:44:09.299407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.299572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.299735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.299893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.299927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.300051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.300084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 
00:26:29.092 [2024-11-20 10:44:09.300298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.300333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.300504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.300538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.300728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.300760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.300958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.300991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.301181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.301239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 
00:26:29.092 [2024-11-20 10:44:09.301427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.301460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.301658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.301691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.301818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.301851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.301958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.301990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 00:26:29.092 [2024-11-20 10:44:09.302230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.302265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it. 
00:26:29.092 [2024-11-20 10:44:09.302450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.092 [2024-11-20 10:44:09.302483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.092 qpair failed and we were unable to recover it.
[identical message pair repeated continuously from 10:44:09.302660 through 10:44:09.327456 — same errno = 111 (connection refused), same tqpair=0x7ff260000b90, same target 10.0.0.2:4420, each attempt ending in "qpair failed and we were unable to recover it."; repeats omitted]
00:26:29.096 [2024-11-20 10:44:09.327634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.327667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.327908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.327943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.328048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.328081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.328286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.328321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.328428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.328460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 
00:26:29.096 [2024-11-20 10:44:09.328590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.328623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.328875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.328914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.096 [2024-11-20 10:44:09.329098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.096 [2024-11-20 10:44:09.329132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.096 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.329371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.329407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.329635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.329668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.329780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.329814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.330050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.330082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.330336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.330371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.330656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.330690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.330814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.330847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.331060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.331093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.331359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.331394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.331578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.331610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.331804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.331837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.332026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.332059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.332327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.332361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.332664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.332698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.332887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.332920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.333049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.333083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.333274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.333309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.333432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.333465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.333650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.333682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.333901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.333934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.334117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.334150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.334353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.334387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.334563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.334596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.334835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.334868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.335057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.335089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.335294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.335329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.335499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.335532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.097 [2024-11-20 10:44:09.335730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.335763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.336007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.336040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.336227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.336262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.336395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.336428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 00:26:29.097 [2024-11-20 10:44:09.336602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.097 [2024-11-20 10:44:09.336636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.097 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.336875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.336908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.337171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.337234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.337417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.337451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.337635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.337668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.337856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.337889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.338068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.338100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.338336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.338377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.338494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.338527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.338711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.338743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.338959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.338992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.339231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.339265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.339455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.339487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.339609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.339643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.339786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.339819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.339941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.339975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.340182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.340225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.340343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.340376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.340556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.340588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.340760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.340793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.340897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.340930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.341065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.341099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.341340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.341375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.341490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.341524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.341642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.341674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.341877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.341909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 
00:26:29.098 [2024-11-20 10:44:09.342096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.342129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.342337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.342371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.098 [2024-11-20 10:44:09.342509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.098 [2024-11-20 10:44:09.342543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.098 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.342659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.342693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.342880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.342913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.343158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.343191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.343339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.343373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.343556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.343589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.343772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.343805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.343924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.343958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.344144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.344177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.344360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.344392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.344563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.344596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.344831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.344864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.345052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.345085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.345192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.345253] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.345422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.345456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.345637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.345669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.345875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.345908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.346088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.346120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.346308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.346343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.346460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.346498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.346679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.346713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.346839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.346872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.347143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.347176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.347426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.347460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.347565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.347599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.347833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.347866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.348069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.348103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.348278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.348312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.348510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.348544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.348652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.348685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.348868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.348902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.349019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.349052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 00:26:29.099 [2024-11-20 10:44:09.349292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.099 [2024-11-20 10:44:09.349326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.099 qpair failed and we were unable to recover it. 
00:26:29.099 [2024-11-20 10:44:09.349625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.349659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.349795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.349828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.350039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.350072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.350222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.350256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.350541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.350575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.350689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.350721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.350828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.350861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.351075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.351108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.351298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.351332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.351596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.351629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.351810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.351844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.351974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.352007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.352130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.352162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.352293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.352328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.352519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.352552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.352764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.352798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.353047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.353079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.353291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.353325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.353440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.353472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.353745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.353777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.353948] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.353981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.354224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.354258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.354438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.354471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.354727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.354760] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.354893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.354926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.355116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.355149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.355330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.355370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.355483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.355516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.355636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.355669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.355855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.355887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 
00:26:29.100 [2024-11-20 10:44:09.356014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.356047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.356170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.100 [2024-11-20 10:44:09.356213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.100 qpair failed and we were unable to recover it. 00:26:29.100 [2024-11-20 10:44:09.356429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.356463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.356703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.356736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.356865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.356898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.357032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.357272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.357427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.357594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.357741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.357901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.357935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.358124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.358156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.358268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.358303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.358494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.358527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.358773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.358806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.358982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.359015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.359194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.359235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.359422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.359456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.359575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.359608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.359800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.359832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.360038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.360072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.360186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.360226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.360462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.360494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.360693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.360726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.360839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.360873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.361050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.361083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.361273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.361308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.361508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.361541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.361753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.361786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 00:26:29.101 [2024-11-20 10:44:09.361901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.101 [2024-11-20 10:44:09.361935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.101 qpair failed and we were unable to recover it. 
00:26:29.101 [2024-11-20 10:44:09.362041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.362074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.362341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.362375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.362566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.362599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.362730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.362762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.363026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.363058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 
00:26:29.102 [2024-11-20 10:44:09.363267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.363302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.363480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.363522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.363711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.363744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.363928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.363960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.364232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.364267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 
00:26:29.102 [2024-11-20 10:44:09.364453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.364485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.364667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.364700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.364956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.364990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.365195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.365255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 00:26:29.102 [2024-11-20 10:44:09.365531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.102 [2024-11-20 10:44:09.365565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.102 qpair failed and we were unable to recover it. 
00:26:29.105 [2024-11-20 10:44:09.386819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.105 [2024-11-20 10:44:09.386894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.105 qpair failed and we were unable to recover it.
00:26:29.105 [2024-11-20 10:44:09.387178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.105 [2024-11-20 10:44:09.387233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.105 qpair failed and we were unable to recover it.
00:26:29.105 [2024-11-20 10:44:09.387431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.105 [2024-11-20 10:44:09.387464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.105 qpair failed and we were unable to recover it. 00:26:29.105 [2024-11-20 10:44:09.387654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.105 [2024-11-20 10:44:09.387687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.105 qpair failed and we were unable to recover it. 00:26:29.105 [2024-11-20 10:44:09.387930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.105 [2024-11-20 10:44:09.387962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.105 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.388169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.388213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.388400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.388433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.388550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.388583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.388757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.388790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.388974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.389179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.389354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.389516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.389731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.389901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.389933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.390069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.390102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.390276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.390311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.390442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.390476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.390717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.390749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.390871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.390903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.391077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.391109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.391364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.391397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.391654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.391687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.391903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.391936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.392199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.392240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.392412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.392445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.392656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.392688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.392900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.392933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.393057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.393090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.393229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.393264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.393393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.393426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.393541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.393574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.393767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.393800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.393976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.394009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.394249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.394284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.394407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.394440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 00:26:29.106 [2024-11-20 10:44:09.394617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.106 [2024-11-20 10:44:09.394650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.106 qpair failed and we were unable to recover it. 
00:26:29.106 [2024-11-20 10:44:09.394836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.394868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.395004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.395036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.395235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.395268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.395511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.395544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.395746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.395779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 
00:26:29.107 [2024-11-20 10:44:09.395953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.395986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.396210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.396245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.396445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.396478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.396602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.396635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.396750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.396782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 
00:26:29.107 [2024-11-20 10:44:09.396916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.396948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.397064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.397097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.397271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.397306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.397570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.397602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.397706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.397738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 
00:26:29.107 [2024-11-20 10:44:09.397854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.397888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.398152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.398184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.398371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.398406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.398530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.398562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.398742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.398775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 
00:26:29.107 [2024-11-20 10:44:09.398883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.398916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.399056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.399089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.399230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.107 [2024-11-20 10:44:09.399265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.107 qpair failed and we were unable to recover it. 00:26:29.107 [2024-11-20 10:44:09.399528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.399561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.399745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.399778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.399955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.399988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.400259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.400294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.400472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.400504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.400689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.400722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.400965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.400998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.401216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.401250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.401501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.401534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.401794] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.401827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.402113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.402145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.402331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.402365] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.402566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.402598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.402767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.402801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.402927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.402959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.403173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.403217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.403406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.403439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.403619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.403652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.403863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.403895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.404101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.404134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.404318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.404358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.404537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.404570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.404750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.404782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.405048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.405080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.405196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.405240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.405412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.405444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 00:26:29.108 [2024-11-20 10:44:09.405562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.108 [2024-11-20 10:44:09.405594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.108 qpair failed and we were unable to recover it. 
00:26:29.108 [2024-11-20 10:44:09.405780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.108 [2024-11-20 10:44:09.405814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.108 qpair failed and we were unable to recover it.
00:26:29.108 [... the same three-line sequence (connect() failed, errno = 111 / sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 / qpair failed and we were unable to recover it.) repeats roughly 114 more times, identical except for timestamps, from 10:44:09.405 through 10:44:09.430 ...]
00:26:29.112 [2024-11-20 10:44:09.430430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.430463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.430599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.430632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.430813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.430846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.430958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.430991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.431233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.431267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 
00:26:29.112 [2024-11-20 10:44:09.431450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.431483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.431671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.431705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.431886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.431920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.432090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.432122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.432394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.432429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 
00:26:29.112 [2024-11-20 10:44:09.432609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.432642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.432830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.432862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.433051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.433085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.433211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.433245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.112 [2024-11-20 10:44:09.433373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.433406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 
00:26:29.112 [2024-11-20 10:44:09.433523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.112 [2024-11-20 10:44:09.433556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.112 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.433756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.433789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.433967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.434000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.434110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.434143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.434335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.434369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 
00:26:29.113 [2024-11-20 10:44:09.434606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.434640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.434834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.434867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.435041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.435074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.435199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.435242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.435433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.435467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 
00:26:29.113 [2024-11-20 10:44:09.435661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.435695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.435931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.435965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.436233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.436268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.436452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.436486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.436669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.436702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 
00:26:29.113 [2024-11-20 10:44:09.436893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.436926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.437115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.437149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.437342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.437377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.437515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.437548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.437652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.437685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 
00:26:29.113 [2024-11-20 10:44:09.437791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.437824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.438007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.438040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.438154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.438188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.438388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.438427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.438666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.438701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 
00:26:29.113 [2024-11-20 10:44:09.438908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.438942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.439073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.113 [2024-11-20 10:44:09.439107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.113 qpair failed and we were unable to recover it. 00:26:29.113 [2024-11-20 10:44:09.439306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.439340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.439557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.439591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.439852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.439886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.440074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.440108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.440228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.440262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.440434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.440467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.440640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.440672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.440862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.440895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.441091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.441124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.441318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.441353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.441532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.441565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.441679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.441711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.441894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.441928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.442109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.442143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.442324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.442358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.442541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.442575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.442768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.442802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.442989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.443140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.443400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.443552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.443721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.443874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.443907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.444087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.444120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.444362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.444396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.444525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.444559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.444701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.444734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.444918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.444951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.445139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.445172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.445365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.445398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.445660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.445693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.445864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.445897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 00:26:29.114 [2024-11-20 10:44:09.446072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.114 [2024-11-20 10:44:09.446106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.114 qpair failed and we were unable to recover it. 
00:26:29.114 [2024-11-20 10:44:09.446237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.114 [2024-11-20 10:44:09.446272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.114 qpair failed and we were unable to recover it.
00:26:29.118 ... the three-line error above (connect() failed with errno = 111, followed by the sock connection error for tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420, and "qpair failed and we were unable to recover it.") repeated approximately 114 more times between 10:44:09.446 and 10:44:09.470 ...
00:26:29.118 [2024-11-20 10:44:09.470618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.470650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.470894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.470927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.471231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.471265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.471405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.471438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.471634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.471667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 
00:26:29.118 [2024-11-20 10:44:09.471874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.471907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.472176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.472220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.472392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.472425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.472615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.472653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.118 qpair failed and we were unable to recover it. 00:26:29.118 [2024-11-20 10:44:09.472778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.118 [2024-11-20 10:44:09.472812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.473100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.473133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.473332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.473368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.473543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.473575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.473763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.473797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.473918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.473950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.474146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.474181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.474316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.474349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.474524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.474558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.474744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.474779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.475016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.475049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.475166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.475200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.475467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.475500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.475769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.475803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.475921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.475955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.476165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.476198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.476486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.476520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.476763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.476797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.476990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.477022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.477261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.477295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.477402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.477435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.477622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.477655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.477771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.477805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.478001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.478034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.478152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.478185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.478329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.478362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.478630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.478664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.478784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.478817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.479024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.479056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.479326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.479361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.479546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.479579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 
00:26:29.119 [2024-11-20 10:44:09.479756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.479789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.480029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.480062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.480289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.119 [2024-11-20 10:44:09.480323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.119 qpair failed and we were unable to recover it. 00:26:29.119 [2024-11-20 10:44:09.480558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.480592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.480810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.480843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.481033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.481066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.481200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.481244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.481443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.481476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.481668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.481706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.481944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.481977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.482183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.482225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.482361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.482395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.482579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.482612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.482764] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.482797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.482912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.482946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.483120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.483154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.483289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.483331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.483464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.483497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.483666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.483698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.483873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.483906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.484008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.484041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.484268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.484303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.484495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.484528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.484669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.484702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.484881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.484914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.485089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.485122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.485307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.485342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.485524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.485556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.485757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.485790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.486054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.486087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 
00:26:29.120 [2024-11-20 10:44:09.486286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.486320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.486584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.486618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.486827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.486860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.487048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.120 [2024-11-20 10:44:09.487081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.120 qpair failed and we were unable to recover it. 00:26:29.120 [2024-11-20 10:44:09.487240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.487275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.487461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.487496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.487681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.487714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.487895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.487928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.488045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.488078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.488214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.488248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.488515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.488548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.488790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.488823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.489015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.489048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.489223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.489258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.489409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.489442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.489627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.489660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.489777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.489810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.489995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.490029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.490211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.490250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.490424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.490457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.490602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.490635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.490754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.490788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.490967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.491000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.491120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.491153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.491358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.491395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.491501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.491531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.491720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.491753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.491993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.492027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.492269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.492303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.492444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.492477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.492596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.492629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.492870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.492903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.493019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.493052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.493290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.493324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.493586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.493619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 
00:26:29.121 [2024-11-20 10:44:09.493740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.493773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.493894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.493927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.121 [2024-11-20 10:44:09.494168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.121 [2024-11-20 10:44:09.494208] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.121 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.494392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.494425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.494556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.494589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.494775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.494808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.495006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.495039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.495233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.495268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.495448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.495481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.495669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.495702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.495937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.495971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.496143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.496176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.496374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.496408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.496676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.496709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.496835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.496868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.497136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.497169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.497290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.497325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.497606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.497639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.497808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.497841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.498014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.498047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.498223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.498257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.498521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.498554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.498744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.498776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.498962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.499006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.499199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.499254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.499375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.499409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.499526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.499559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.499666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.499699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.499983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.500226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 
00:26:29.122 [2024-11-20 10:44:09.500390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.500619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.500781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.122 [2024-11-20 10:44:09.500927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.122 [2024-11-20 10:44:09.500959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.122 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.501080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.501113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.501281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.501315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.501422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.501456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.501641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.501674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.501872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.501905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.502169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.502212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.502330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.502364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.502605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.502638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.502775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.502808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.502992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.503218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.503453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.503605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.503811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.503958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.503991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.504165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.504198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.504464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.504498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.504693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.504726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.504917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.504950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.505130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.505163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.505450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.505485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.505659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.505692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.505863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.505895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.506174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.506218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.506488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.506520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.506698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.506732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.123 [2024-11-20 10:44:09.506918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.506951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.507073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.507106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.507311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.507345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.507476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.507508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 00:26:29.123 [2024-11-20 10:44:09.507641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.123 [2024-11-20 10:44:09.507674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.123 qpair failed and we were unable to recover it. 
00:26:29.124 [2024-11-20 10:44:09.507854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.124 [2024-11-20 10:44:09.507887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.124 qpair failed and we were unable to recover it. 00:26:29.124 [2024-11-20 10:44:09.508015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.124 [2024-11-20 10:44:09.508059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.124 qpair failed and we were unable to recover it. 00:26:29.124 [2024-11-20 10:44:09.508282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.124 [2024-11-20 10:44:09.508315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.124 qpair failed and we were unable to recover it. 00:26:29.124 [2024-11-20 10:44:09.508533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.124 [2024-11-20 10:44:09.508566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.124 qpair failed and we were unable to recover it. 00:26:29.124 [2024-11-20 10:44:09.508729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.124 [2024-11-20 10:44:09.508762] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.124 qpair failed and we were unable to recover it. 
00:26:29.126 [2024-11-20 10:44:09.525397] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf90af0 is same with the state(6) to be set
00:26:29.126 [2024-11-20 10:44:09.525664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.126 [2024-11-20 10:44:09.525737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.126 qpair failed and we were unable to recover it.
00:26:29.126 [2024-11-20 10:44:09.525890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.126 [2024-11-20 10:44:09.525927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.126 qpair failed and we were unable to recover it.
00:26:29.126 [2024-11-20 10:44:09.526115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.126 [2024-11-20 10:44:09.526149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.126 qpair failed and we were unable to recover it.
00:26:29.126 [2024-11-20 10:44:09.526346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.126 [2024-11-20 10:44:09.526382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.126 qpair failed and we were unable to recover it.
00:26:29.126 [2024-11-20 10:44:09.526632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.126 [2024-11-20 10:44:09.526664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.126 qpair failed and we were unable to recover it.
00:26:29.127 [2024-11-20 10:44:09.534570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.534602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.534808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.534840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.535015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.535048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.535227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.535300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.535500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.535538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 
00:26:29.127 [2024-11-20 10:44:09.535834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.535869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.536000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.536034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.536155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.536187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.536457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.536493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.536668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.536699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 
00:26:29.127 [2024-11-20 10:44:09.536882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.536916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.537096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.537128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.537299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.537335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.537477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.537511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.537649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.537681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 
00:26:29.127 [2024-11-20 10:44:09.537868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.127 [2024-11-20 10:44:09.537903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.127 qpair failed and we were unable to recover it. 00:26:29.127 [2024-11-20 10:44:09.538018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.538060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.538238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.538273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.538480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.538513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.538768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.538801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.538942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.538975] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.539232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.539267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.539390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.539423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.539632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.539666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.539848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.539881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.540052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.540086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.540346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.540380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.540498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.540531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.540719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.540752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.540981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.541016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.541196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.541240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.541435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.541469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.541734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.541767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.541902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.541936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.542177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.542221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.542395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.542428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.542536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.542567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.542755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.542789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.543055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.543088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.543275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.543310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.543507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.543540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.543656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.543688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.543927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.543962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.544219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.544291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.544537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.544575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.544755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.544788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.545028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.545061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.545193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.545250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.545360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.545392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.545629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.545661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 
00:26:29.128 [2024-11-20 10:44:09.545844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.545876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.545994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.546026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.546264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.546300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.128 qpair failed and we were unable to recover it. 00:26:29.128 [2024-11-20 10:44:09.546476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.128 [2024-11-20 10:44:09.546509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.546639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.546672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 
00:26:29.129 [2024-11-20 10:44:09.546909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.546940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.547056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.547089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.547301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.547337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.547460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.547493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.547754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.547787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 
00:26:29.129 [2024-11-20 10:44:09.547906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.547937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.548059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.548093] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.548294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.548330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.548463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.548494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.548675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.548707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 
00:26:29.129 [2024-11-20 10:44:09.548839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.548872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.549054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.549088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.549265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.549300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.549546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.549580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.549762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.549795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 
00:26:29.129 [2024-11-20 10:44:09.550077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.550117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.550297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.550334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.550587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.550619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.550788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.550820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 00:26:29.129 [2024-11-20 10:44:09.550944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.129 [2024-11-20 10:44:09.550978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.129 qpair failed and we were unable to recover it. 
00:26:29.129 [2024-11-20 10:44:09.551163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.129 [2024-11-20 10:44:09.551197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.129 qpair failed and we were unable to recover it.
00:26:29.129 [... the posix_sock_create / nvme_tcp_qpair_connect_sock error pair above repeats with identical content: for tqpair=0xf82ba0 through 10:44:09.558350, then for tqpair=0x7ff268000b90 through 10:44:09.578347. Every attempt against addr=10.0.0.2, port=4420 fails with errno = 111 (ECONNREFUSED), each followed by "qpair failed and we were unable to recover it." ...]
00:26:29.132 [2024-11-20 10:44:09.578515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.578545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.578664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.578699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.578908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.578937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.579050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.579080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.579329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.579360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 
00:26:29.132 [2024-11-20 10:44:09.579555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.579584] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.579784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.579815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.579936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.579966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.580069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.580098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.580296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.580328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 
00:26:29.132 [2024-11-20 10:44:09.580520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.580549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.580717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.580747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.580848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.580877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.581067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.581098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.581330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.581361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 
00:26:29.132 [2024-11-20 10:44:09.581566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.581596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.581797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.132 [2024-11-20 10:44:09.581828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.132 qpair failed and we were unable to recover it. 00:26:29.132 [2024-11-20 10:44:09.581939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.581968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.582089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.582231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.582440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.582579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.582775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.582925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.582954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.583074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.583219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.583365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.583574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.583806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.583960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.583994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.584266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.584301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.584476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.584509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.584683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.584716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.584889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.584921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.585094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.585245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.585400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.585553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.585766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.585917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.585950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.586069] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.586102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.586277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.586318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.586431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.586464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.586637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.586669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.586867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.586900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 
00:26:29.133 [2024-11-20 10:44:09.587077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.587109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.587219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.587254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.133 qpair failed and we were unable to recover it. 00:26:29.133 [2024-11-20 10:44:09.587361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.133 [2024-11-20 10:44:09.587395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.587518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.587550] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.587677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.587709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.587851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.587883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.588013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.588155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.588318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.588522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.588735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.588949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.588982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.589102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.589135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.589257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.589291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.589481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.589514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.589624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.589654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.589775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.589809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.590075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.590107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.590238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.590272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.590487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.590520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.590706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.590738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.590927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.590960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.591072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.591105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.591300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.591335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.591518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.591551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.591748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.591781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.591974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.592006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.592125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.592157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.592328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.592364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 00:26:29.134 [2024-11-20 10:44:09.592670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.134 [2024-11-20 10:44:09.592702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.134 qpair failed and we were unable to recover it. 
00:26:29.134 [2024-11-20 10:44:09.592891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.134 [2024-11-20 10:44:09.592923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.134 qpair failed and we were unable to recover it.
[... the same connect() failed / sock connection error / qpair failed sequence for tqpair=0x7ff268000b90 (addr=10.0.0.2, port=4420) repeats through 2024-11-20 10:44:09.616150 ...]
00:26:29.137 [2024-11-20 10:44:09.616406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.137 [2024-11-20 10:44:09.616440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.137 qpair failed and we were unable to recover it. 00:26:29.137 [2024-11-20 10:44:09.616610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.137 [2024-11-20 10:44:09.616644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.137 qpair failed and we were unable to recover it. 00:26:29.137 [2024-11-20 10:44:09.616815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.137 [2024-11-20 10:44:09.616848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.137 qpair failed and we were unable to recover it. 00:26:29.137 [2024-11-20 10:44:09.617056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.137 [2024-11-20 10:44:09.617088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.617276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.617311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.617525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.617558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.617744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.617778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.618050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.618083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.618274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.618309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.618505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.618538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.618725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.618758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.618956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.618995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.619248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.619283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.619492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.619525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.619660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.619693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.619864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.619898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.620004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.620036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.620166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.620198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.620472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.620505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.620618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.620651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.620885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.620917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.621037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.621070] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.621241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.621276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.621471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.621504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.621716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.621749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.621945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.621978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.622160] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.622193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.622441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.622474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.622652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.622686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.622894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.622928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.623183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.623226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.623332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.623366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.623481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.623512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.623681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.623714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.138 [2024-11-20 10:44:09.623928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.623960] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 
00:26:29.138 [2024-11-20 10:44:09.624155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.138 [2024-11-20 10:44:09.624193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.138 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.624406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.624439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.624559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.624591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.624704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.624738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.624993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.625151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.625365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.625522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625555] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.625744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.625887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.625919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.626033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.626263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.626401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.626548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.626699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.626848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.626881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.627059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.627098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.627294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.627328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.627452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.627484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.627655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.627688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.627901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.627934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.628127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.628159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.628299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.628332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.628454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.628488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.628662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.628695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.628865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.628897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.629085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.629241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.629400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.629561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.629780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.629928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.629961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.630151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.630184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.630377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.630409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.630538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.630571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 
00:26:29.139 [2024-11-20 10:44:09.630750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.630783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.630963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.630997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.631120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.139 [2024-11-20 10:44:09.631153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.139 qpair failed and we were unable to recover it. 00:26:29.139 [2024-11-20 10:44:09.631298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.631332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 00:26:29.140 [2024-11-20 10:44:09.631519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.631552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 
00:26:29.140 [2024-11-20 10:44:09.631813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.631845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 00:26:29.140 [2024-11-20 10:44:09.631970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.632003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 00:26:29.140 [2024-11-20 10:44:09.632210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.632244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 00:26:29.140 [2024-11-20 10:44:09.632410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.632481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 00:26:29.140 [2024-11-20 10:44:09.632797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.140 [2024-11-20 10:44:09.632868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.140 qpair failed and we were unable to recover it. 
00:26:29.141 [2024-11-20 10:44:09.647245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.141 [2024-11-20 10:44:09.647318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.141 qpair failed and we were unable to recover it.
00:26:29.143 [2024-11-20 10:44:09.656890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.656923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.657037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.657069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.657267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.657302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.657426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.657460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.657697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.657729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 
00:26:29.143 [2024-11-20 10:44:09.657866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.657899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.658084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.658117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.658384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.658423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.658597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.658631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.658759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.658800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 
00:26:29.143 [2024-11-20 10:44:09.659064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.659097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.659226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.659261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.659369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.659403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.659621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.659655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.659836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.659869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 
00:26:29.143 [2024-11-20 10:44:09.659980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.660013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.660277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.660312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.660424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.660457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.660563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.660596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.660785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.660818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 
00:26:29.143 [2024-11-20 10:44:09.661003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.661037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.661151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.661183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.661328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.661362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.661541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.661574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.661778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.661811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 
00:26:29.143 [2024-11-20 10:44:09.662003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.662036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.662143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.662176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.662317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.662351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.662476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.662509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.143 qpair failed and we were unable to recover it. 00:26:29.143 [2024-11-20 10:44:09.662706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.143 [2024-11-20 10:44:09.662740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.662920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.662953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.663125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.663166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.663366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.663401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.663593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.663626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.663867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.663900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.664028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.664061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.664214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.664249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.664377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.664411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.664586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.664620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.664791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.664823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.665006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.665039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.665222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.665258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.665520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.665553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.665745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.665778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.665892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.665924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.666107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.666141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.666282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.666318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.666512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.666545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.666732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.666764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.666956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.666994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.667181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.667225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.667398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.667431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.667619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.667652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.667848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.667882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.668066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.668099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.668383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.668417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.668679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.668712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.668840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.668874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.669074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.669107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.669249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.669284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.669483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.669516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.669801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.669834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.670024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.670057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.670255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.670291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.670420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.670453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 
00:26:29.144 [2024-11-20 10:44:09.670636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.670669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.670844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.144 [2024-11-20 10:44:09.670876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.144 qpair failed and we were unable to recover it. 00:26:29.144 [2024-11-20 10:44:09.671065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.671097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.671220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.671255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.671375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.671407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 
00:26:29.145 [2024-11-20 10:44:09.671689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.671721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.671837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.671870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.671980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.672012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.672198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.672239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 00:26:29.145 [2024-11-20 10:44:09.672475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.145 [2024-11-20 10:44:09.672507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.145 qpair failed and we were unable to recover it. 
00:26:29.145 [2024-11-20 10:44:09.672678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.145 [2024-11-20 10:44:09.672710] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.145 qpair failed and we were unable to recover it.
00:26:29.147 [2024-11-20 10:44:09.690519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.147 [2024-11-20 10:44:09.690603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:29.147 qpair failed and we were unable to recover it.
00:26:29.147 [2024-11-20 10:44:09.692428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.147 [2024-11-20 10:44:09.692502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.147 qpair failed and we were unable to recover it.
00:26:29.147 [2024-11-20 10:44:09.692761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.147 [2024-11-20 10:44:09.692797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.147 qpair failed and we were unable to recover it.
00:26:29.148 [2024-11-20 10:44:09.698338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.698372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.698544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.698577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.698696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.698730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.698992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.699024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.699134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.699167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-11-20 10:44:09.699391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.699426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.699604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.699643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.699903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.699936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.700042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.700075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.700181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.700225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-11-20 10:44:09.700412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.700445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.700656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.700690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.700869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.700902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.701121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.701153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.701291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.701325] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-11-20 10:44:09.701568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.701600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.701778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.701811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.701917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.701948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.702120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.702153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.702362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.702396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 
00:26:29.148 [2024-11-20 10:44:09.702517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.702549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.702748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.702781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.703044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.148 [2024-11-20 10:44:09.703078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.148 qpair failed and we were unable to recover it. 00:26:29.148 [2024-11-20 10:44:09.703222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.703256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.703468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.703502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.703622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.703655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.703840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.703873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.704119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.704152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.704399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.704433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.704567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.704599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.704724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.704757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.705006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.705039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.705241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.705276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.705408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.705441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.705627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.705659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.705773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.705806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.706066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.706099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.706272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.706308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.706502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.706535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.706712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.706744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.706926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.706959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.707177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.707217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.707390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.707422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.707683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.707715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.707896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.707929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.708191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.708234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.708423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.708466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.708659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.708692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.708800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.708833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.708966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.708999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.709190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.709235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.709346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.709381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.709618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.709651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.709763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.709796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.710057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.710091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.710296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.710331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.710457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.710491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.710665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.710698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.710814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.710848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.149 [2024-11-20 10:44:09.711087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.711120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 
00:26:29.149 [2024-11-20 10:44:09.711334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.149 [2024-11-20 10:44:09.711369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.149 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.711637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.711670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.711848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.711881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.712050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.712083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.712333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.712367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 
00:26:29.150 [2024-11-20 10:44:09.712632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.712665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.712848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.712880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.713070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.713103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.713226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.713260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.713502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.713535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 
00:26:29.150 [2024-11-20 10:44:09.713661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.713694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.713877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.713911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.714195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.714243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.714359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.714393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 00:26:29.150 [2024-11-20 10:44:09.714581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.150 [2024-11-20 10:44:09.714615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.150 qpair failed and we were unable to recover it. 
00:26:29.150 [2024-11-20 10:44:09.714818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.150 [2024-11-20 10:44:09.714850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.150 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats for every reconnect attempt from 10:44:09.714971 through 10:44:09.740795 ...]
00:26:29.153 [2024-11-20 10:44:09.740970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.153 [2024-11-20 10:44:09.741002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.153 qpair failed and we were unable to recover it.
00:26:29.153 [2024-11-20 10:44:09.741220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.741255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.741507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.741540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.741722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.741755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.742024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.742057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.742172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.742216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 
00:26:29.153 [2024-11-20 10:44:09.742394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.742428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.742664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.742698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.742916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.742950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.743133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.743166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.743415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.743450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 
00:26:29.153 [2024-11-20 10:44:09.743708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.743741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.744007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.744040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.744226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.744261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.744438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.744471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.153 [2024-11-20 10:44:09.744730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.744763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 
00:26:29.153 [2024-11-20 10:44:09.744957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.153 [2024-11-20 10:44:09.744992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.153 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.745252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.745293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.745515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.745547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.745754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.745788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.745963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.745995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.746189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.746244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.746429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.746463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.746706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.746739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.746921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.746954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.747194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.747238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.747361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.747395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.747574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.747606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.747746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.747781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.747973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.748006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.748287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.748322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.748507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.748541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.748785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.748818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.748955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.748988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.749192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.749235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.749427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.749461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.749703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.749737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.749972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.750005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.750231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.750266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.750548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.750582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.750835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.750868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.751084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.751117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.751370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.751405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.751551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.751585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.751773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.751807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.751995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.752028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.752246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.752281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.752394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.752424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.752724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.752756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.752929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.752962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.753198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.753244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 
00:26:29.154 [2024-11-20 10:44:09.753435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.753468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.753649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.753681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.753807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.154 [2024-11-20 10:44:09.753839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.154 qpair failed and we were unable to recover it. 00:26:29.154 [2024-11-20 10:44:09.753962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.753995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.754196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.754243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-11-20 10:44:09.754515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.754548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.754766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.754806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.755071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.755103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.755377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.755412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.755555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.755588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-11-20 10:44:09.755875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.755907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.756169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.756228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.756496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.756528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.756769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.756802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.757040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.757073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-11-20 10:44:09.757259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.757293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.757537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.757569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.757861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.757894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.758073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.758106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.758258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.758293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-11-20 10:44:09.758484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.758521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.758728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.758763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.758962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.758996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.759256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.759292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 00:26:29.155 [2024-11-20 10:44:09.759532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.155 [2024-11-20 10:44:09.759566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.155 qpair failed and we were unable to recover it. 
00:26:29.155 [2024-11-20 10:44:09.759743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.155 [2024-11-20 10:44:09.759776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.155 qpair failed and we were unable to recover it.
00:26:29.155 [... the same three-line sequence (posix.c:1054:posix_sock_create connect() failed, errno = 111; nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error; "qpair failed and we were unable to recover it") repeats continuously from 10:44:09.759 through 10:44:09.788, all targeting addr=10.0.0.2, port=4420, first for tqpair=0x7ff268000b90, then tqpair=0xf82ba0 (from 10:44:09.761718), then tqpair=0x7ff25c000b90 (from 10:44:09.778767) ...]
00:26:29.438 [2024-11-20 10:44:09.787998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.438 [2024-11-20 10:44:09.788031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.438 qpair failed and we were unable to recover it.
00:26:29.438 [2024-11-20 10:44:09.788301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.788337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.788603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.788637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.788782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.788816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.788992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.789026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.789295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.789330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 
00:26:29.438 [2024-11-20 10:44:09.789521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.789554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.789675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.789708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.789904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.789937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.790213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.790249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.790530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.790566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 
00:26:29.438 [2024-11-20 10:44:09.790828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.790861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.791112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.791146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.791361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.791397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.791583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.791617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.791804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.791845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 
00:26:29.438 [2024-11-20 10:44:09.792085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.792119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.792366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.792402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.792588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.792621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.792813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.792847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.793020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.793053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 
00:26:29.438 [2024-11-20 10:44:09.793328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.793366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.793611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.793644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.793897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.793931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.794245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.794281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.794545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.794578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 
00:26:29.438 [2024-11-20 10:44:09.794788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.794822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.795007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.795041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.795174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.795216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.438 [2024-11-20 10:44:09.795475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.438 [2024-11-20 10:44:09.795508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.438 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.795684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.795719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.795914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.795947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.796156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.796188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.796381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.796416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.796606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.796637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.796822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.796857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.797133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.797165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.797285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.797320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.797503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.797537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.797787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.797822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.798021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.798055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.798182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.798228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.798520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.798592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.798849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.798886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.799068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.799121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.799317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.799354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.799595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.799628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.799804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.799837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.800013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.800046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.800304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.800340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.800489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.800522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.800707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.800742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.800937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.800969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.801243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.801278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.801477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.801510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.801684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.801727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.801926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.801959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.802173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.802222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.802416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.802450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.802608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.802641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.802845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.802877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.803138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.803172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.803398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.803432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.803608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.803641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.803895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.803929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 00:26:29.439 [2024-11-20 10:44:09.804225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.804261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.439 qpair failed and we were unable to recover it. 
00:26:29.439 [2024-11-20 10:44:09.804520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.439 [2024-11-20 10:44:09.804553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.804813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.804847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.805089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.805122] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.805370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.805404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.805588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.805621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 
00:26:29.440 [2024-11-20 10:44:09.805800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.805834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.806028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.806061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.806193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.806237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.806457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.806491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.806708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.806742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 
00:26:29.440 [2024-11-20 10:44:09.806983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.807016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.807225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.807260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.807454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.807486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.807592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.807627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 00:26:29.440 [2024-11-20 10:44:09.807812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.440 [2024-11-20 10:44:09.807846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.440 qpair failed and we were unable to recover it. 
00:26:29.443 [... the preceding message pair (posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420, qpair failed and we were unable to recover it) repeats continuously from 10:44:09.807985 through 10:44:09.835375 ...]
00:26:29.443 [2024-11-20 10:44:09.835571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.835604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.835873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.835908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.836083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.836116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.836246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.836281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.836465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.836499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 
00:26:29.443 [2024-11-20 10:44:09.836696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.836729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.836981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.837015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.837270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.837306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.837523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.837556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.837751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.837785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 
00:26:29.443 [2024-11-20 10:44:09.837997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.838030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.838237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.838278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.838502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.838535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.838778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.838812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.839020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.839054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 
00:26:29.443 [2024-11-20 10:44:09.839173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.839215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.839419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.839453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.839737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.839771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.840017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.840050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.840169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.840214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 
00:26:29.443 [2024-11-20 10:44:09.840417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.840449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.840633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.443 [2024-11-20 10:44:09.840666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.443 qpair failed and we were unable to recover it. 00:26:29.443 [2024-11-20 10:44:09.840860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.840894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.841161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.841194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.841405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.841440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.841645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.841679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.841980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.842013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.842297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.842333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.842452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.842486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.842734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.842769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.842966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.842999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.843248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.843284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.843478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.843511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.843781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.843815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.843939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.843973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.844240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.844275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.844470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.844504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.844758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.844792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.845072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.845106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.845401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.845436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.845564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.845596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.845840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.845874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.846116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.846149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.846320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.846357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.846495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.846529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.846713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.846747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.846885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.846919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.847059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.847092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.847325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.847362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.847547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.847580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.847824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.847857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.848101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.848140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.848447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.848482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.848685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.848719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.848904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.848938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.444 [2024-11-20 10:44:09.849232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.849285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.849558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.849592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.849898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.849931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.850187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.850233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 00:26:29.444 [2024-11-20 10:44:09.850438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.444 [2024-11-20 10:44:09.850471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.444 qpair failed and we were unable to recover it. 
00:26:29.445 [2024-11-20 10:44:09.850688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.850723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.850869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.850902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.851148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.851182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.851469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.851504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.851736] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.851770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 
00:26:29.445 [2024-11-20 10:44:09.852026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.852060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.852241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.852277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.852482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.852515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.852713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.852747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.852926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.852959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 
00:26:29.445 [2024-11-20 10:44:09.853165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.853199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.853394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.853427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.853630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.853663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.853923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.853957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.854227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.854263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 
00:26:29.445 [2024-11-20 10:44:09.854463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.854496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.854693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.854727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.854838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.854871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.855003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.855037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 00:26:29.445 [2024-11-20 10:44:09.855184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.445 [2024-11-20 10:44:09.855228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.445 qpair failed and we were unable to recover it. 
00:26:29.448 [2024-11-20 10:44:09.882417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.882451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.882658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.882692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.882822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.882857] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.883150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.883184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.883414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.883449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 
00:26:29.448 [2024-11-20 10:44:09.883598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.883632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.883844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.883879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.884011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.884045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.884238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.884318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.884550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.884589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 
00:26:29.448 [2024-11-20 10:44:09.884783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.884819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.885072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.885107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.885264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.885302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.885456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.885490] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.885771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.885806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 
00:26:29.448 [2024-11-20 10:44:09.886037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.886072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.886362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.886398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.448 qpair failed and we were unable to recover it. 00:26:29.448 [2024-11-20 10:44:09.886602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.448 [2024-11-20 10:44:09.886636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.886838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.886873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.887092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.887127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.887356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.887394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.887605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.887648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.887808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.887843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.888103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.888138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.888388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.888423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.888626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.888660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.888994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.889029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.889233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.889269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.889495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.889529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.889743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.889778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.889997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.890031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.890332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.890368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.890578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.890612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.890761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.890796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.891100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.891135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.891425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.891462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.891668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.891702] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.892034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.892069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.892337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.892373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.892594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.892627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.892859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.892893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.893184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.893229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.893434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.893470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.893664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.893698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.893859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.893894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.894114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.894147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.894437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.894473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.894688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.894722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.894993] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.895029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.895163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.895197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.895349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.895384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.895523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.895557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.895828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.895863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.896046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.896079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 00:26:29.449 [2024-11-20 10:44:09.896356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.896393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.449 qpair failed and we were unable to recover it. 
00:26:29.449 [2024-11-20 10:44:09.896593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.449 [2024-11-20 10:44:09.896627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.896915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.896950] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.897210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.897246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.897441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.897476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.897629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.897663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 
00:26:29.450 [2024-11-20 10:44:09.897961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.897996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.898191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.898242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.898448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.898483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.898623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.898658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.898802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.898839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 
00:26:29.450 [2024-11-20 10:44:09.899117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.899151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.899272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.899308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.899541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.899575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.899873] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.899908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.900174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.900218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 
00:26:29.450 [2024-11-20 10:44:09.900501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.900535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.900675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.900709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.901086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.901121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.901349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.901385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 00:26:29.450 [2024-11-20 10:44:09.901588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.450 [2024-11-20 10:44:09.901622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.450 qpair failed and we were unable to recover it. 
00:26:29.450 [2024-11-20 10:44:09.901785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.450 [2024-11-20 10:44:09.901819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.450 qpair failed and we were unable to recover it.
[... the same three-line sequence — posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 / nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 / "qpair failed and we were unable to recover it." — repeats with only the timestamps changing, from 2024-11-20 10:44:09.902111 through 2024-11-20 10:44:09.930791 (console time 00:26:29.450–00:26:29.453) ...]
00:26:29.453 [2024-11-20 10:44:09.931068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.931102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.931253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.931290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.931495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.931529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.931784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.931819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.932131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.932165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-11-20 10:44:09.932416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.932452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.932593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.932627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.932853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.932889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.933078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.933112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.933309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.933346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 
00:26:29.453 [2024-11-20 10:44:09.933548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.933582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.933871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.933906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.453 [2024-11-20 10:44:09.934196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.453 [2024-11-20 10:44:09.934239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.453 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.934450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.934485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.934678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.934713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.934947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.934982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.935235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.935271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.935416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.935451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.935751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.935833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.936082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.936121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.936432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.936470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.936744] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.936780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.936984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.937019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.937227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.937262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.937518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.937552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.937708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.937743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.937955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.937988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.938193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.938237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.938492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.938526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.938715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.938749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.938962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.938995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.939323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.939361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.939495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.939529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.939791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.939826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.940081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.940118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.940372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.940408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.940538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.940573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.940723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.940756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.940956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.940990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.941286] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.941323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.941581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.941616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.941765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.941799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.942027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.942061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.942254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.942290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.942545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.942580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.942733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.942773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.942985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.943019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.943315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.943351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.943553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.943587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.943895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.943929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 
00:26:29.454 [2024-11-20 10:44:09.944162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.454 [2024-11-20 10:44:09.944195] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.454 qpair failed and we were unable to recover it. 00:26:29.454 [2024-11-20 10:44:09.944430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.944465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.944596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.944628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.944847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.944880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.945220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.945255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.945480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.945513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.945786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.945821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.946119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.946153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.946451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.946488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.946713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.946749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.947102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.947136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.947424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.947460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.947661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.947695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.947958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.947993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.948233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.948268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.948474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.948508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.948659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.948694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.949032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.949067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.949344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.949380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.949638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.949672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.949829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.949864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.950046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.950081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.950358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.950395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.950687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.950723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.951027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.951062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.951266] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.951302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.951554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.951589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.951792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.951826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.952047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.952082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.952279] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.952315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.952590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.952625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.952839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.952874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.953154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.953190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.953360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.953394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 00:26:29.455 [2024-11-20 10:44:09.953540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.455 [2024-11-20 10:44:09.953575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.455 qpair failed and we were unable to recover it. 
00:26:29.455 [2024-11-20 10:44:09.953776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.455 [2024-11-20 10:44:09.953810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.455 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.953946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.953984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.954186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.954245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.954365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.954399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.954543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.954578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.954902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.954938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.955142] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.955176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.955319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.955359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.955503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.955535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.955690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.955724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.956060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.956094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.956246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.956282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.956431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.956466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.956691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.956726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.957010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.957045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.957268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.957304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.957509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.957543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.957699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.957734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.957871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.957905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.958041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.958076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.958211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.958249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.958407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.958440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.958654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.958689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.958916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.958949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.959157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.959191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.959465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.959499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.959615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.959649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.959857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.959891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.960165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.960235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.960456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.960491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.960672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.960708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.960857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.960890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.961083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.961116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.961341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.961377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.961506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.961541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.961725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.961759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.962014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.962049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.962275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.962311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.962528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.456 [2024-11-20 10:44:09.962562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.456 qpair failed and we were unable to recover it.
00:26:29.456 [2024-11-20 10:44:09.962825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.962862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.963120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.963156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.963368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.963403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.963537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.963572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.963786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.963821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.964047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.964081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.964222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.964257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.964460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.964494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.964716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.964751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.964942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.964976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.965172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.965217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.965405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.965439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.965718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.965752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.965950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.965984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.966259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.966296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.966433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.966469] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.966625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.966660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.966807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.966841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.967130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.967165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.967315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.967351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.967547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.967581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.967709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.967745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.967955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.967989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.968224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.968261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.968447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.968482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.968688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.968723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.968981] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.969016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.969223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.969259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.969513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.969549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.969851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.969885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.970085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.970125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.970405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.970443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.970632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.970668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.970883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.970918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.971176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.971220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.971423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.971457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.971739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.971774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.972038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.972072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.457 [2024-11-20 10:44:09.972311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.457 [2024-11-20 10:44:09.972347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.457 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.972605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.972640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.972936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.972971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.973284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.973320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.973577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.973611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.973761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.973796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.974102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.974137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.974417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.974454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.974737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.974772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.975050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.975086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.975287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.975323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.975522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.975556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.975690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.975725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.976051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.976087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.976242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.976278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.976499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.976537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.976762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.976796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.976990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.458 [2024-11-20 10:44:09.977025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.458 qpair failed and we were unable to recover it.
00:26:29.458 [2024-11-20 10:44:09.977332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.977368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.977632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.977672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.977884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.977920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.978155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.978191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.978386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.978421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 
00:26:29.458 [2024-11-20 10:44:09.978612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.978648] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.978847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.978881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.979079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.979112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.979327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.979362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.979567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.979600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 
00:26:29.458 [2024-11-20 10:44:09.979792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.979827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.979936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.979970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.980245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.980283] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.980522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.980557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.980707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.980743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 
00:26:29.458 [2024-11-20 10:44:09.981037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.981073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.981263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.981298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.981504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.981539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.981798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.981833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.982095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.982130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 
00:26:29.458 [2024-11-20 10:44:09.982389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.458 [2024-11-20 10:44:09.982425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.458 qpair failed and we were unable to recover it. 00:26:29.458 [2024-11-20 10:44:09.982633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.982669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.982975] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.983010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.983274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.983310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.983510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.983545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.983742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.983777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.984048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.984083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.984331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.984369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.984558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.984594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.984786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.984820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.985015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.985050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.985340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.985376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.985645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.985679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.985826] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.985862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.986058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.986092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.986344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.986381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.986660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.986696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.986832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.986867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.987119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.987154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.987368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.987404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.987672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.987706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.987994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.988029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.988225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.988267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.988419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.988453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.988643] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.988678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.989011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.989047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.989257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.989293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.989493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.989528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.989762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.989796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.989926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.989961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.990080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.990115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.990391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.990427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.990647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.990680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.990970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.991005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.991122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.991153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 
00:26:29.459 [2024-11-20 10:44:09.991347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.991381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.991642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.991678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.991920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.991953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.992263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.992298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.459 qpair failed and we were unable to recover it. 00:26:29.459 [2024-11-20 10:44:09.992601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.459 [2024-11-20 10:44:09.992637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.992792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.992826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.993109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.993144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.993425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.993460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.993662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.993696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.993849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.993883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.994166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.994212] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.994478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.994513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.994706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.994740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.994951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.994986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.995193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.995260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.995467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.995501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.995705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.995740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.995951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.995987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.996289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.996326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.996461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.996497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.996623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.996657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.996946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.996981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.997236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.997271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.997541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.997575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.997830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.997866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.998119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.998153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.998390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.998427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.998680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.998715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.998970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.999005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.999232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.999268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:09.999523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.999557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.999712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.999747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:09.999890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:09.999924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.000112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.000148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.000451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.000488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 
00:26:29.460 [2024-11-20 10:44:10.000672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.000708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.000874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.000908] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.001129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.001163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.001375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.460 [2024-11-20 10:44:10.001411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.460 qpair failed and we were unable to recover it. 00:26:29.460 [2024-11-20 10:44:10.001609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.001644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.001837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.001871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.002082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.002116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.002258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.002294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.002451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.002486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.002629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.002663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.002908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.002942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.003135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.003169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.003444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.003483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.003742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.003779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.003923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.003957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.004152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.004186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.004360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.004396] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.004566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.004600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.004796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.004830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.005082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.005116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.005422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.005467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.005603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.005637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.005852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.005887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.006145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.006181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.006317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.006351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.006486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.006520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.006706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.006740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.006986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.007022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.007307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.007345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.007544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.007578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.007886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.007920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.008176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.008221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.008420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.008455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.008655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.008690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.008902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.008938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.009122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.009157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.009327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.009363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.009638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.009673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.009810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.009844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.009976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.010010] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 
00:26:29.461 [2024-11-20 10:44:10.010220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.010257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.010385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.461 [2024-11-20 10:44:10.010419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.461 qpair failed and we were unable to recover it. 00:26:29.461 [2024-11-20 10:44:10.010559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.010593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.010909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.010944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.011137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.011172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.011321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.011357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.011554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.011587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.011783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.011818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.012021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.012055] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.012249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.012285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.012542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.012576] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.012769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.012803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.012933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.012968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.013123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.013157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.013385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.013421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.013583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.013619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.013782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.013814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.014015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.014047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.014228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.014263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.014385] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.014420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.014569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.014602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.014743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.014779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.014963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.015004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.015225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.015259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.015438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.015470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.015701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.015733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.015919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.015951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.016140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.016173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.016501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.016541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.016693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.016729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.016930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.016964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.017121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.017156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.017307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.017343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.017488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.017521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.017849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.017884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.018027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.018064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.018271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.018306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.018559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.018595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.018786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.018820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 00:26:29.462 [2024-11-20 10:44:10.019042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.019075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.462 qpair failed and we were unable to recover it. 
00:26:29.462 [2024-11-20 10:44:10.019219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.462 [2024-11-20 10:44:10.019254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.019397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.019432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.019632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.019665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.019962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.019997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.020117] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.020153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-11-20 10:44:10.020330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.020366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.020589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.020625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.020814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.020851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.021064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.021107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.021358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.021395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-11-20 10:44:10.021598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.021634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.021895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.021930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.022198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.022245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.022529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.022565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.022833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.022869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-11-20 10:44:10.023125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.023163] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.023470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.023508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.023752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.023787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.024030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.024066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 00:26:29.463 [2024-11-20 10:44:10.024263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.463 [2024-11-20 10:44:10.024302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.463 qpair failed and we were unable to recover it. 
00:26:29.463 [2024-11-20 10:44:10.024440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.024476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.024699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.024735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.024990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.025026] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.025245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.025282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.025433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.025470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.025671] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.025709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.025912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.025947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.026156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.026192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.026416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.026452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.026660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.026694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.026992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.027028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.027291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.027329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.027536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.027572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.027834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.027869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.028085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.028121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.028258] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.028295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.028504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.028539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.028830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.028865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.463 [2024-11-20 10:44:10.029007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.463 [2024-11-20 10:44:10.029041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.463 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.029325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.029367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.029500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.029535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.029844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.029880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.030151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.030186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.030336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.030372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.030596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.030631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.030929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.030965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.031229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.031266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.031466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.031504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.031707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.031743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.031959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.032003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.032292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.032329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.032518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.032553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.032809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.032844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.033101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.033136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.033357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.033394] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.033668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.033703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.033892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.033928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.034129] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.034164] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.034373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.034409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.034668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.034703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.034901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.034938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.035191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.035234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.035421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.035458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.035717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.035752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.035950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.035986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.036181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.036224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.036461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.036497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.036803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.036846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.037086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.037124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.037264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.037302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.037572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.037607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.037822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.037859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.037988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.038022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.464 qpair failed and we were unable to recover it.
00:26:29.464 [2024-11-20 10:44:10.038339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.464 [2024-11-20 10:44:10.038375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.038637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.038672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.038929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.038964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.039221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.039264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.039456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.039491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.039638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.039673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.039867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.039900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.040174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.040219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.040422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.040458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.040603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.040638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.040838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.040872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.041012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.041046] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.041250] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.041286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.041410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.041451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.041590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.041624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.041818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.041853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.042040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.042074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.042275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.042356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.042679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.042759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.042995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.043034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.043182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.043238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.043428] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.043463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.043721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.043756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.043959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.043993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.044181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.044228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.044534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.044568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.044783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.044817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.045071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.045107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.045331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.045370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.045507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.045542] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.045731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.045776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.045974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.046008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.046195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.046256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.046449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.046485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.046667] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.046701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.046978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.047014] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.047222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.047258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.047406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.465 [2024-11-20 10:44:10.047441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.465 qpair failed and we were unable to recover it.
00:26:29.465 [2024-11-20 10:44:10.047576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.466 [2024-11-20 10:44:10.047610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.466 qpair failed and we were unable to recover it.
00:26:29.466 [2024-11-20 10:44:10.047755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.466 [2024-11-20 10:44:10.047790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.466 qpair failed and we were unable to recover it.
00:26:29.466 [2024-11-20 10:44:10.047936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.466 [2024-11-20 10:44:10.047971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.466 qpair failed and we were unable to recover it.
00:26:29.466 [2024-11-20 10:44:10.048151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.466 [2024-11-20 10:44:10.048185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.466 qpair failed and we were unable to recover it.
00:26:29.466 [2024-11-20 10:44:10.048400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.048435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.048594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.048629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.048756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.048791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.049067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.049102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.049270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.049309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.049557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.049589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.049786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.049819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.050077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.050109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.050348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.050382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.050553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.050585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.050806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.050838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.051066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.051097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.051296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.051328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.052954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.053013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.053189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.053240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.053422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.053484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.053684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.053721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.053950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.053986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.054168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.054216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.054430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.054465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.054598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.054632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.054833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.054868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.055062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.055095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.055231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.055267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.055390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.055425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.055536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.055570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.055825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.055860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.056144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.056179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.056380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.056425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.056634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.056668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 
00:26:29.466 [2024-11-20 10:44:10.056871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.056906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.057103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.057137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.057378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.057414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.466 [2024-11-20 10:44:10.057714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.466 [2024-11-20 10:44:10.057750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.466 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.057879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.057913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.058070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.058104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.058398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.058434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.058658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.058692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.058897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.058932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.059145] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.059179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.059461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.059498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.059685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.059720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.059921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.059955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.060166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.060200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.060424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.060457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.060735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.060771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.060971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.061137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.061378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061458] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.061612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.061796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.061959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.061992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.062187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.062246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.062502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.062536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.062681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.062714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.062862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.062897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.063097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.063132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.063265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.063301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.063500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.063534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.063678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.063711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.063840] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.063874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.064124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.064156] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.064292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.064328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.064534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.064568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.064704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.064738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.064855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.064890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.065085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.065120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.065248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.065282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.065496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.065544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.065693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.065726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 
00:26:29.467 [2024-11-20 10:44:10.065939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.065973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.467 [2024-11-20 10:44:10.066227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.467 [2024-11-20 10:44:10.066263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.467 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.066407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.066441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.066692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.066725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.066935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.066969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 
00:26:29.468 [2024-11-20 10:44:10.067107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.067140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.067341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.067377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.067650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.067685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.067825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.067859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.068003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.068037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 
00:26:29.468 [2024-11-20 10:44:10.068337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.068372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.068491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.068525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.068740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.068775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.068905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.068939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 00:26:29.468 [2024-11-20 10:44:10.069090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.069125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it. 
00:26:29.468 [2024-11-20 10:44:10.069309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.069344] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure pairs for tqpair=0x7ff268000b90 repeated 9 more times, 10:44:10.069529–10:44:10.071217]
00:26:29.468 [2024-11-20 10:44:10.071601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.468 [2024-11-20 10:44:10.071669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.468 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure pairs for tqpair=0xf82ba0 repeated 67 more times, 10:44:10.071959–10:44:10.089401]
00:26:29.470 [2024-11-20 10:44:10.089639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.470 [2024-11-20 10:44:10.089717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.470 qpair failed and we were unable to recover it.
[identical connect()/qpair-failure pairs for tqpair=0x7ff260000b90 repeated 36 more times, 10:44:10.089868–10:44:10.098757]
00:26:29.471 [2024-11-20 10:44:10.099043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.099076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.099284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.099322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.099504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.099539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.099718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.099770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.099924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.099959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 
00:26:29.471 [2024-11-20 10:44:10.100256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.100293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.100488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.100523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.100721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.100756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.100964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.101000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.101236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.101272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 
00:26:29.471 [2024-11-20 10:44:10.101419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.101456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.101598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.101633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.101782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.101817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.102084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.102119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.102402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.102438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 
00:26:29.471 [2024-11-20 10:44:10.102585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.102620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.102780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.102813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.103077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.471 [2024-11-20 10:44:10.103111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.471 qpair failed and we were unable to recover it. 00:26:29.471 [2024-11-20 10:44:10.103335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.103371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.103497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.103531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.103689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.103723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.103987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.104021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.104268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.104303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.104454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.104489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.104620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.104654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.104799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.104832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.105086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.105120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.105359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.105393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.105597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.105631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.105905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.105942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.106144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.106179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.106390] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.106424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.106574] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.106608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.106739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.106773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.106971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.107006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.107255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.107292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.107552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.107588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.107797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.107830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.108085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.108120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.108412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.108448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.108594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.108628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.108774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.108809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.108944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.108978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.109124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.109169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.109338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.109373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.109527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.109561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.109776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.109810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.110014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.110048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.110200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.110244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.110464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.110497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.110619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.110654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.110786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.110819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.110991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.111024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.111163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.111196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.111364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.111398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 
00:26:29.472 [2024-11-20 10:44:10.111586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.472 [2024-11-20 10:44:10.111620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.472 qpair failed and we were unable to recover it. 00:26:29.472 [2024-11-20 10:44:10.111878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.111911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.112112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.112145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.112314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.112349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.112556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.112590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-11-20 10:44:10.112829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.112863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.113023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.113056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.113234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.113269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.113478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.113509] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.113858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.113892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-11-20 10:44:10.114236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.114273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.114412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.114443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.114573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.114607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.114728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.114763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.114879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.114913] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-11-20 10:44:10.115196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.115249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.115454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.115487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.115616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.115650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.115950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.115986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 00:26:29.473 [2024-11-20 10:44:10.116264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.473 [2024-11-20 10:44:10.116300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.473 qpair failed and we were unable to recover it. 
00:26:29.473 [2024-11-20 10:44:10.116504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.473 [2024-11-20 10:44:10.116538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.473 qpair failed and we were unable to recover it.
[... the same three-line sequence repeated for every subsequent retry between 10:44:10.116676 and 10:44:10.141803: connect() to 10.0.0.2 port 4420 kept failing with errno = 111, and each qpair was reported as failed and unrecoverable ...]
00:26:29.476 [2024-11-20 10:44:10.143730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.476 [2024-11-20 10:44:10.143789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:29.476 qpair failed and we were unable to recover it.
00:26:29.476 [2024-11-20 10:44:10.144103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.476 [2024-11-20 10:44:10.144140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.144366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.144402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.144528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.144559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.144750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.144783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.144895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.144927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.145126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.145158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.145351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.145386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.145502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.145534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.145654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.145688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.145893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.145926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.146125] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.146159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.146304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.146339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.146469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.146503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.146645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.146683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.146806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.146839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.147030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.147063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.147193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.147235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.147357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.147401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.147619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.147654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.147842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.147874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.148015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.148232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148268] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.148400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.148561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.148739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.148898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.148930] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.149054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.149087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.149289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.149324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.149455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.149488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.149593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.149626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 
00:26:29.755 [2024-11-20 10:44:10.149830] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.149862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.149971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.150003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.755 qpair failed and we were unable to recover it. 00:26:29.755 [2024-11-20 10:44:10.150196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.755 [2024-11-20 10:44:10.150241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.150360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.150393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.150647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.150679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.150844] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.150877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.151066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.151098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.151229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.151265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.151393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.151425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.151612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.151644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.151849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.151881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.152001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.152271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.152495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.152631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.152804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.152956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.152986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.153238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.153269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.153441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.153471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.153571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.153602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.153787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.153817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.153949] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.153980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.154098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.154225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.154377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.154516] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.154657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.154888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.154919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.155037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.155261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.155401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.155570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.155717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.155849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.155880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.156050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.156080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 
00:26:29.756 [2024-11-20 10:44:10.156341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.156374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.756 [2024-11-20 10:44:10.156480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.756 [2024-11-20 10:44:10.156511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.756 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.156705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.156736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.156861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.156891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.157139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.157306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.157459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.157598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.157741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.157888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.157919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.158035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.158067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.158331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.158366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.158589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.158620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.158870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.158900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.159081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.159260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.159401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.159545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.159702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.159856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.159885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.160147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.160178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.160352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.160385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.160624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.160654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.160787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.160819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.160990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.161020] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.161138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.161168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.161357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.161390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.161637] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.161667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.161858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.161895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.162018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.162049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 
00:26:29.757 [2024-11-20 10:44:10.162186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.162224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.162452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.162482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.162660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.757 [2024-11-20 10:44:10.162691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.757 qpair failed and we were unable to recover it. 00:26:29.757 [2024-11-20 10:44:10.162814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.162850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.163021] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.163051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.163161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.163191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.163329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.163360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.163528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.163559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.163759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.163790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.164041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.164071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.164213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.164246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.164408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.164438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.164567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.164597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.164725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.164756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.165023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.165052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.165231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.165264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.165400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.165431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.165639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.165670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.165784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.165815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.166012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.166042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.166290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.166322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.166452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.166481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.166675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.166707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.166960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.166990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.167102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.167133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.167269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.167301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.167548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.167579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.167727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.167763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.167889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.167924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.168122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.168152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.168392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.168424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.168614] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.168644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.168785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.168816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.169077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.169107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.169237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.169269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 
00:26:29.758 [2024-11-20 10:44:10.169391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.169421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.169668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.169698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.169965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.758 [2024-11-20 10:44:10.169995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.758 qpair failed and we were unable to recover it. 00:26:29.758 [2024-11-20 10:44:10.170199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.170245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.170379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.170409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.170669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.170699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.170893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.170923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.171058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.171088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.171273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.171305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.171550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.171580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.171761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.171792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.172022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.172054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.172240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.172275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.172465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.172497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.172628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.172660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.173027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.173060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.173245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.173279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.173513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.173546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.173741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.173774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.173967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.173999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.174190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.174233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.174372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.174404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.174525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.174559] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.174757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.174790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.175122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.175155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.175352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.175386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.175517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.175551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.175688] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.175721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.175865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.175899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.176156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.176189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 
00:26:29.759 [2024-11-20 10:44:10.176437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.176472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.176663] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.176696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.176969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.177002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.759 qpair failed and we were unable to recover it. 00:26:29.759 [2024-11-20 10:44:10.177227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.759 [2024-11-20 10:44:10.177262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-11-20 10:44:10.177382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.177415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
00:26:29.760 [2024-11-20 10:44:10.177624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.177659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-11-20 10:44:10.177851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.177883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-11-20 10:44:10.178155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.178193] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-11-20 10:44:10.178347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.178382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 00:26:29.760 [2024-11-20 10:44:10.178651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.178684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
00:26:29.760 [2024-11-20 10:44:10.178885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.760 [2024-11-20 10:44:10.178919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:29.760 qpair failed and we were unable to recover it. 
[Same connect() failure (errno = 111) and qpair recovery failure repeated for timestamps 10:44:10.179047 through 10:44:10.206077, all against addr=10.0.0.2, port=4420; the first repetitions report tqpair=0x7ff268000b90, the final five report tqpair=0xf82ba0.]
00:26:29.763 [2024-11-20 10:44:10.206294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.206330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-11-20 10:44:10.206467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.206500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-11-20 10:44:10.206630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.206664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-11-20 10:44:10.206946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.206979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-11-20 10:44:10.207255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.207289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 
00:26:29.763 [2024-11-20 10:44:10.207486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.763 [2024-11-20 10:44:10.207520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.763 qpair failed and we were unable to recover it. 00:26:29.763 [2024-11-20 10:44:10.207770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.207805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.207933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.207965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.208219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.208255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.208417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.208452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.208664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.208696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.208898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.208932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.209214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.209250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.209453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.209486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.209684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.209718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.210018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.210052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.210312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.210347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.210527] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.210560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.210701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.210735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.210894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.210927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.211115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.211149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.211417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.211453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.211654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.211693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.212039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.212074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.212217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.212251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.212467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.212500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.212679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.212714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.212921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.212954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.213233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.213269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.213469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.213503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.213699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.213733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.213929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.213962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.214152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.214187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.214468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.214503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.214629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.214663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 
00:26:29.764 [2024-11-20 10:44:10.214927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.214961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.215172] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.764 [2024-11-20 10:44:10.215213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.764 qpair failed and we were unable to recover it. 00:26:29.764 [2024-11-20 10:44:10.215491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.215526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.215722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.215756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.215957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.215990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 
00:26:29.765 [2024-11-20 10:44:10.216197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.216245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.216471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.216505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.216759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.216792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.217090] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.217124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.217394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.217429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 
00:26:29.765 [2024-11-20 10:44:10.217645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.217679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.217878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.217912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.218228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.218265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.218552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.218586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.218729] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.218763] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 
00:26:29.765 [2024-11-20 10:44:10.219025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.219059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.219346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.219381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.219604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.219638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.219853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.219887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.220183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.220225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 
00:26:29.765 [2024-11-20 10:44:10.220416] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.220451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.220653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.220687] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.220960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.220993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.221223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.221259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.221408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.221442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 
00:26:29.765 [2024-11-20 10:44:10.221712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.221747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.221951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.221986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.222190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.222233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.222535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.765 [2024-11-20 10:44:10.222574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.765 qpair failed and we were unable to recover it. 00:26:29.765 [2024-11-20 10:44:10.222785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.222820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 
00:26:29.766 [2024-11-20 10:44:10.223098] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.223131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.223270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.223306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.223565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.223599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.223882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.223916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.224096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.224130] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 
00:26:29.766 [2024-11-20 10:44:10.224333] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.224369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.224647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.224681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.224887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.224920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.225054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.225088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 00:26:29.766 [2024-11-20 10:44:10.225228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.225260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it. 
00:26:29.766 [2024-11-20 10:44:10.225463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.766 [2024-11-20 10:44:10.225495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.766 qpair failed and we were unable to recover it.
[... identical connect()/qpair-failure triplet repeated with advancing timestamps through 10:44:10.255814: errno = 111 (ECONNREFUSED) on tqpair=0xf82ba0, addr=10.0.0.2, port=4420 ...]
00:26:29.769 [2024-11-20 10:44:10.256081] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-11-20 10:44:10.256119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-11-20 10:44:10.256387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-11-20 10:44:10.256424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-11-20 10:44:10.256711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-11-20 10:44:10.256745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-11-20 10:44:10.256938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-11-20 10:44:10.256972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 00:26:29.769 [2024-11-20 10:44:10.257248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.769 [2024-11-20 10:44:10.257284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.769 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.257548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.257582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.257881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.257916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.258169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.258213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.258511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.258546] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.258750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.258790] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.259049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.259084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.259380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.259417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.259606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.259639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.259849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.259884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.260139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.260174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.260482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.260517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.260803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.260837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.261033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.261068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.261271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.261307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.261509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.261543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.261792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.261826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.262126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.262160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.262434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.262470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.262670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.262704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.262927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.262961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.263239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.263276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.263583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.263616] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.263798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.263834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.264119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.264153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.264471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.264506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.264781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.264815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.265025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.265059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.265291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.265326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.265600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.265634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 00:26:29.770 [2024-11-20 10:44:10.265858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.770 [2024-11-20 10:44:10.265893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.770 qpair failed and we were unable to recover it. 
00:26:29.770 [2024-11-20 10:44:10.266107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.266141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.266304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.266346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.266556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.266591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.266867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.266900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.267183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.267228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-11-20 10:44:10.267445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.267479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.267738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.267772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.267978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.268013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.268217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.268252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.268456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.268491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-11-20 10:44:10.268719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.268752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.269050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.269084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.269298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.269336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.269522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.269556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.269804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.269839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-11-20 10:44:10.270043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.270078] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.270352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.270388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.270545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.270579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.270758] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.270793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.271095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.271129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-11-20 10:44:10.271380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.271417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.271607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.271641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.271774] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.271808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.272071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.272103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.272354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.272390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 
00:26:29.771 [2024-11-20 10:44:10.272587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.272621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.272769] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.771 [2024-11-20 10:44:10.272802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.771 qpair failed and we were unable to recover it. 00:26:29.771 [2024-11-20 10:44:10.273023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.273056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.273281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.273315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.273600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.273635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-11-20 10:44:10.273909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.273943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.274146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.274180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.274444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.274479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.274662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.274697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.274989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.275022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-11-20 10:44:10.275297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.275333] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.275592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.275627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.275809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.275844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.276135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.276171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.276436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.276472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.772 [2024-11-20 10:44:10.276725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.276758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.277061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.277096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.277391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.277434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.277706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.277742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 00:26:29.772 [2024-11-20 10:44:10.277999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.772 [2024-11-20 10:44:10.278033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.772 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.309150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.309185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.309386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.309421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.309603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.309638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.309783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.309818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.310071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.310105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.310296] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.310331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.310634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.310669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.310947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.310983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.311183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.311244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.311451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.311486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.311772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.311807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.312049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.312083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.312349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.312385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.312606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.312641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.312924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.312959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.313219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.313254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.313382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.313417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.313704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.313738] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.313943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.313978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.314191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.314237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.314392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.314428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.314558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.314591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.314787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.314820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.315157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.315199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 00:26:29.776 [2024-11-20 10:44:10.315492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.776 [2024-11-20 10:44:10.315527] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.776 qpair failed and we were unable to recover it. 
00:26:29.776 [2024-11-20 10:44:10.315788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.315822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.316030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.316065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.316325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.316361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.316512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.316547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.316849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.316884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.317157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.317191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.317393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.317429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.317665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.317699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.317882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.317917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.318065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.318100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.318374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.318410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.318612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.318647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.318961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.318995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.319218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.319254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.319511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.319547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.319806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.319840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.320091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.320126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.320420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.320457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.320572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.320606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.320882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.320917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.321116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.321150] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.321421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.321457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.321738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.321773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.322029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.322065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.322302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.322338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.322534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.322570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.322732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.322767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.323052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.323087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.323275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.323311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.323588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.323623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 
00:26:29.777 [2024-11-20 10:44:10.323887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.323922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.324198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.324247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.324506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.777 [2024-11-20 10:44:10.324540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.777 qpair failed and we were unable to recover it. 00:26:29.777 [2024-11-20 10:44:10.324740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.324774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.325071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.325105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 
00:26:29.778 [2024-11-20 10:44:10.325300] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.325337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.325530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.325564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.325817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.325852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.326105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.326139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.326449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.326491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 
00:26:29.778 [2024-11-20 10:44:10.326691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.326726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.326933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.326968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.327168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.327223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.327406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.327440] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.327570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.327605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 
00:26:29.778 [2024-11-20 10:44:10.327881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.327916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.328094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.328128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.328356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.328392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.328672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.328707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 00:26:29.778 [2024-11-20 10:44:10.328932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.778 [2024-11-20 10:44:10.328966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.778 qpair failed and we were unable to recover it. 
00:26:29.778 [2024-11-20 10:44:10.329158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.778 [2024-11-20 10:44:10.329194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.778 qpair failed and we were unable to recover it.
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111 -> nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 -> "qpair failed and we were unable to recover it.") repeats over a hundred more times, with timestamps running from 10:44:10.329390 through 10:44:10.360265 ...]
00:26:29.783 [2024-11-20 10:44:10.360554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.360589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.360798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.360832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.361101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.361136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.361389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.361426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.361735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.361770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 
00:26:29.783 [2024-11-20 10:44:10.361995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.362029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.362302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.362339] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.362463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.362497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.362752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.362787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 00:26:29.783 [2024-11-20 10:44:10.363016] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.363051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.783 qpair failed and we were unable to recover it. 
00:26:29.783 [2024-11-20 10:44:10.363315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.783 [2024-11-20 10:44:10.363363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.363618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.363653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.363960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.363995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.364211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.364248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.364510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.364544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 
00:26:29.784 [2024-11-20 10:44:10.364820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.364855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.365105] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.365141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.365378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.365414] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.365627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.365662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.365914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.365948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 
00:26:29.784 [2024-11-20 10:44:10.366235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.366271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.366506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.366540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.366808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.366843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.367059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.367094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.367306] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.367343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 
00:26:29.784 [2024-11-20 10:44:10.367549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.367583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.367855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.367889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.368176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.368221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.368506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.368541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.368850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.368884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 
00:26:29.784 [2024-11-20 10:44:10.369140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.369175] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.369470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.369505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.784 [2024-11-20 10:44:10.369741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.784 [2024-11-20 10:44:10.369775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.784 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.370043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.370076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.370274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.370311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 
00:26:29.785 [2024-11-20 10:44:10.370495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.370531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.370672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.370706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.370890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.370931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.371225] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.371261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.371412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.371446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 
00:26:29.785 [2024-11-20 10:44:10.371633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.371666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.371884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.371918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.372113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.372147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.372412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.372447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.372635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.372670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 
00:26:29.785 [2024-11-20 10:44:10.372784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.372818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.373000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.373034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.373234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.373271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.373547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.373582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 00:26:29.785 [2024-11-20 10:44:10.373785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.785 [2024-11-20 10:44:10.373819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.785 qpair failed and we were unable to recover it. 
00:26:29.786 [2024-11-20 10:44:10.373999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.374033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.374318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.374356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.374564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.374598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.374799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.374833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.375085] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.375120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 
00:26:29.786 [2024-11-20 10:44:10.375375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.375411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.375541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.375575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.375780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.375815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.376089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.376123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.376319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.376355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 
00:26:29.786 [2024-11-20 10:44:10.376633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.376668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.376799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.376833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.377109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.377143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.377452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.377488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.377761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.377795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 
00:26:29.786 [2024-11-20 10:44:10.378087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.786 [2024-11-20 10:44:10.378123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.786 qpair failed and we were unable to recover it. 00:26:29.786 [2024-11-20 10:44:10.378352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.378389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.378645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.378680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.378982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.379017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.379292] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.379329] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 
00:26:29.787 [2024-11-20 10:44:10.379613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.379647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.379835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.379869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.380130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.380166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.380332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.380368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 00:26:29.787 [2024-11-20 10:44:10.380670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.787 [2024-11-20 10:44:10.380705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.787 qpair failed and we were unable to recover it. 
00:26:29.787 [2024-11-20 10:44:10.380957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.787 [2024-11-20 10:44:10.380992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.787 qpair failed and we were unable to recover it.
[... the same connect() failed (errno = 111) / sock connection error / "qpair failed and we were unable to recover it." group for tqpair=0xf82ba0 (addr=10.0.0.2, port=4420) repeats ~114 more times between 10:44:10.381 and 10:44:10.411 ...]
00:26:29.793 [2024-11-20 10:44:10.411561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.411596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.411741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.411775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.411960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.411995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.412254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.412290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.412570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.412605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 
00:26:29.793 [2024-11-20 10:44:10.412888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.412923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.413199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.413260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.413535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.413570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.793 qpair failed and we were unable to recover it. 00:26:29.793 [2024-11-20 10:44:10.413836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.793 [2024-11-20 10:44:10.413871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.414075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.414109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 
00:26:29.794 [2024-11-20 10:44:10.414402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.414438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.414665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.414700] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.414925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.414959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.415171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.415216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.415495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.415529] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 
00:26:29.794 [2024-11-20 10:44:10.415803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.415839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.416035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.416069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.416330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.416366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.416642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.416678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.416961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.416996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 
00:26:29.794 [2024-11-20 10:44:10.417272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.417309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.417507] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.417541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.417735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.417770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.417977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.418012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.418330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.418378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 
00:26:29.794 [2024-11-20 10:44:10.418602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.418636] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.418893] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.418928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.794 qpair failed and we were unable to recover it. 00:26:29.794 [2024-11-20 10:44:10.419059] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.794 [2024-11-20 10:44:10.419094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.419372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.419408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.419658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.419693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 
00:26:29.795 [2024-11-20 10:44:10.419964] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.419999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.420213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.420249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.420453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.420487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.420606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.420640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.420842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.420877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 
00:26:29.795 [2024-11-20 10:44:10.421187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.421241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.421464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.421498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.421704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.421739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.422022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.422058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.422339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.422376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 
00:26:29.795 [2024-11-20 10:44:10.422557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.422592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.422878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.422912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.423055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.423090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.423365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.423401] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 00:26:29.795 [2024-11-20 10:44:10.423514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.795 [2024-11-20 10:44:10.423549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.795 qpair failed and we were unable to recover it. 
00:26:29.796 [2024-11-20 10:44:10.423734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.423769] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.423884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.423918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.424110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.424144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.424409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.424445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.424660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.424695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 
00:26:29.796 [2024-11-20 10:44:10.425013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.425047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.425243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.425279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.425490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.425525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.425801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.425836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.426155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.426190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 
00:26:29.796 [2024-11-20 10:44:10.426406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.426442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.426720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.426754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.427014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.427049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.427263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.427300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.427502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.427537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 
00:26:29.796 [2024-11-20 10:44:10.427734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.427770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.428050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.428086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.428369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.796 [2024-11-20 10:44:10.428405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.796 qpair failed and we were unable to recover it. 00:26:29.796 [2024-11-20 10:44:10.428680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.428715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.428846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.428881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 
00:26:29.797 [2024-11-20 10:44:10.429066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.429106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.429255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.429290] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.429499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.429533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.429742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.429774] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.430002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.430035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 
00:26:29.797 [2024-11-20 10:44:10.430224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.430259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.430460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.430495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.430773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.430807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.431002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.431036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.431289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.431326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 
00:26:29.797 [2024-11-20 10:44:10.431588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.431622] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.431877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.431911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.432126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.432161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.432415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.432451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.432656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.432690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 
00:26:29.797 [2024-11-20 10:44:10.432946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.432981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.433171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.433215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.433488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.433522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.433778] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.433813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 00:26:29.797 [2024-11-20 10:44:10.434068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.797 [2024-11-20 10:44:10.434104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.797 qpair failed and we were unable to recover it. 
00:26:29.798 [2024-11-20 10:44:10.434240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.434275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.434530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.434564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.434869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.434904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.435163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.435197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.435414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.435449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 
00:26:29.798 [2024-11-20 10:44:10.435704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.435739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.435928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.435962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.436244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.436287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.436548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.436583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.436795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.436830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 
00:26:29.798 [2024-11-20 10:44:10.437103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.437138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.437378] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.437415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.437627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.437662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.437865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.437900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.438159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.438194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 
00:26:29.798 [2024-11-20 10:44:10.438460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.438495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.438771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.438805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.439022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.439056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.439334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.439370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.798 [2024-11-20 10:44:10.439580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.439614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 
00:26:29.798 [2024-11-20 10:44:10.439743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.798 [2024-11-20 10:44:10.439778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.798 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.440096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.440132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.440412] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.440448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.440582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.440615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.440809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.440844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 
00:26:29.799 [2024-11-20 10:44:10.441121] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.441157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.441443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.441479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.441717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.441752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.442001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.442036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.442265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.442301] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 
00:26:29.799 [2024-11-20 10:44:10.442580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.442614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.442894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.442929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.443079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.443113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.443316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.443352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 00:26:29.799 [2024-11-20 10:44:10.443549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.799 [2024-11-20 10:44:10.443583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.799 qpair failed and we were unable to recover it. 
00:26:29.799 [2024-11-20 10:44:10.443865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.443900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.444198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.444245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.444448] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.444483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.444689] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.444723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.444995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.445030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 
00:26:29.800 [2024-11-20 10:44:10.445338] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.445375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.445698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.445732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.445938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.445973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.446252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.446288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.446576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.446610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 
00:26:29.800 [2024-11-20 10:44:10.446827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.446862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.447143] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.447178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.447515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.447552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.447828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.447869] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.448011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.448045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 
00:26:29.800 [2024-11-20 10:44:10.448260] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.448296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.448485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.448519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.448775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.448810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.448995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.449029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.449309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.449345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 
00:26:29.800 [2024-11-20 10:44:10.449565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.449601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.449808] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.449843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.450067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.800 [2024-11-20 10:44:10.450102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.800 qpair failed and we were unable to recover it. 00:26:29.800 [2024-11-20 10:44:10.450361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.450397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.450661] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.450696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 
00:26:29.801 [2024-11-20 10:44:10.450989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.451023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.451295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.451331] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.451556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.451592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.451845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.451879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.452023] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.452057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 
00:26:29.801 [2024-11-20 10:44:10.452307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.452343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.452669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.452704] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.452987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.453022] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.453299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.453349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.453620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.453655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 
00:26:29.801 [2024-11-20 10:44:10.453888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.453923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.454200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.454257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.454480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.454515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.454644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.454678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.454935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.454970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 
00:26:29.801 [2024-11-20 10:44:10.455228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.455269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.455459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.455494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.801 [2024-11-20 10:44:10.455694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.801 [2024-11-20 10:44:10.455729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.801 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.455982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.456018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.456320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.456356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 
00:26:29.802 [2024-11-20 10:44:10.456568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.456602] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.456803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.456838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.457113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.457149] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.457364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.457399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 00:26:29.802 [2024-11-20 10:44:10.457703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:29.802 [2024-11-20 10:44:10.457737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:29.802 qpair failed and we were unable to recover it. 
00:26:29.802 [2024-11-20 10:44:10.457999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:29.802 [2024-11-20 10:44:10.458034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:29.802 qpair failed and we were unable to recover it.
[The identical three-line sequence — connect() failed (errno = 111), sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it." — repeats with successive timestamps from 10:44:10.458 through 10:44:10.489.]
00:26:30.082 [2024-11-20 10:44:10.489415] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.489450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 00:26:30.082 [2024-11-20 10:44:10.489714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.489748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 00:26:30.082 [2024-11-20 10:44:10.489972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.490006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 00:26:30.082 [2024-11-20 10:44:10.490221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.490257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 00:26:30.082 [2024-11-20 10:44:10.490482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.490516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 
00:26:30.082 [2024-11-20 10:44:10.490772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.082 [2024-11-20 10:44:10.490806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.082 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.491065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.491100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.491287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.491322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.491578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.491612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.491866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.491900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.492089] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.492124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.492313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.492351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.492589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.492624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.492907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.492941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.493170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.493233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.493443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.493479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.493731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.493765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.493950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.493985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.494224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.494259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.494484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.494519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.494771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.494806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.494942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.494976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.495184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.495228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.495433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.495468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.495743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.495777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.496063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.496100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.496336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.496372] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.496641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.496676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.496930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.496964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.497268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.497304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.497563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.497599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.497856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.497890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.498196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.498241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.498427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.498462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.498712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.498746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 
00:26:30.083 [2024-11-20 10:44:10.499044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.499080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.499398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.499434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.499687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.499721] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.083 [2024-11-20 10:44:10.499912] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.083 [2024-11-20 10:44:10.499947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.083 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.500155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.500189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.500401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.500436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.500747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.500781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.501047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.501083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.501368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.501405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.501680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.501715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.501996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.502031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.502188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.502235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.502430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.502464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.502664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.502705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.502889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.502923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.503199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.503244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.503543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.503577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.503809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.503851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.504139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.504172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.504441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.504476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.504604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.504638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.504945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.504980] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.505261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.505296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.505491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.505525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.505745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.505779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.506042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.506080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.506383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.506418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.506620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.506654] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.506910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.506944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.507277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.507314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 
00:26:30.084 [2024-11-20 10:44:10.507568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.507603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.507890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.507924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.508223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.508258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.508394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.084 [2024-11-20 10:44:10.508430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.084 qpair failed and we were unable to recover it. 00:26:30.084 [2024-11-20 10:44:10.508730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.508765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 
00:26:30.085 [2024-11-20 10:44:10.508974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.509008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 00:26:30.085 [2024-11-20 10:44:10.509284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.509326] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 00:26:30.085 [2024-11-20 10:44:10.509476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.509512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 00:26:30.085 [2024-11-20 10:44:10.509766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.509802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 00:26:30.085 [2024-11-20 10:44:10.510030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.085 [2024-11-20 10:44:10.510065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.085 qpair failed and we were unable to recover it. 
00:26:30.085 [2024-11-20 10:44:10.510332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.085 [2024-11-20 10:44:10.510368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.085 qpair failed and we were unable to recover it.
[... the same connect()/sock-connection-error/"qpair failed" triple repeats verbatim (errno = 111, tqpair=0xf82ba0, addr=10.0.0.2, port=4420) with only the microsecond timestamps advancing, from 10:44:10.510 through 10:44:10.541 ...]
00:26:30.088 [2024-11-20 10:44:10.541572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-11-20 10:44:10.541606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.088 qpair failed and we were unable to recover it. 00:26:30.088 [2024-11-20 10:44:10.541717] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.088 [2024-11-20 10:44:10.541756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.542076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.542111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.542366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.542402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.542606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.542639] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.542839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.542874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.543150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.543185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.543322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.543357] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.543656] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.543690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.543974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.544007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.544219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.544254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.544552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.544587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.544719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.544754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.544960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.544995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.545193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.545241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.545539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.545575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.545691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.545722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.545914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.545948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.546147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.546181] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.546509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.546544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.546675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.546707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.546988] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.547021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.547272] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.547308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.547491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.547525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.547732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.547765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.547996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.548031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.548322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.548358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.548544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.548579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.548831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.548866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.549131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.549170] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 
00:26:30.089 [2024-11-20 10:44:10.549461] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.549496] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.549716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.549751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.549974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.550008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.550263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.089 [2024-11-20 10:44:10.550298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.089 qpair failed and we were unable to recover it. 00:26:30.089 [2024-11-20 10:44:10.550607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.550645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.550928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.550965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.551153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.551186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.551438] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.551474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.551639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.551673] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.551804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.551846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.552067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.552102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.552297] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.552335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.552487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.552524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.552733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.552766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.552965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.552999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.553281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.553317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.553626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.553660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.553916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.553954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.554148] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.554182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.554396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.554432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.554642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.554676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.554870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.554903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.555181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.555233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.555509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.555549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.555768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.555803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.555987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.556021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.556218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.556254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.556469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.556504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.556806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.556840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.557104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.557148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 
00:26:30.090 [2024-11-20 10:44:10.557465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.557504] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.557760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.557795] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.557995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.558030] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.090 [2024-11-20 10:44:10.558236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.090 [2024-11-20 10:44:10.558273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.090 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.558481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.558516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 
00:26:30.091 [2024-11-20 10:44:10.558712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.558746] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.559001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.559047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.559184] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.559232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.559534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.559569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.559754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.559788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 
00:26:30.091 [2024-11-20 10:44:10.559978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.560012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.560233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.560273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.560559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.560596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.560888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.560924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 00:26:30.091 [2024-11-20 10:44:10.561171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.091 [2024-11-20 10:44:10.561214] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.091 qpair failed and we were unable to recover it. 
00:26:30.091 [2024-11-20 10:44:10.561422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.091 [2024-11-20 10:44:10.561456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.091 qpair failed and we were unable to recover it.
00:26:30.091 [... identical posix_sock_create / nvme_tcp_qpair_connect_sock error block (errno = 111, tqpair=0xf82ba0, addr=10.0.0.2, port=4420) repeated ~114 more times, timestamps 2024-11-20 10:44:10.561751 through 10:44:10.591895 ...]
00:26:30.094 [2024-11-20 10:44:10.592112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-11-20 10:44:10.592147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-11-20 10:44:10.592442] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-11-20 10:44:10.592478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-11-20 10:44:10.592631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.094 [2024-11-20 10:44:10.592670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.094 qpair failed and we were unable to recover it. 00:26:30.094 [2024-11-20 10:44:10.593008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.593044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.593320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.593356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.593609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.593645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.593779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.593813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.594100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.594135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.594462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.594501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.594763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.594805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.595074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.595113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.595394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.595435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.595714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.595753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.595987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.596023] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.596152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.596188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.596425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.596462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.596679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.596714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.596858] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.596892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.597094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.597128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.597330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.597367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.597575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.597609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.597917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.597955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.598087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.598123] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.598356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.598392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.598605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.598638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.598953] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.598989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.599177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.599238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.599447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.599485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.599706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.599742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.599944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.599979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.600166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.600215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.600374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.600408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.600605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.600640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.600909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.600944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.601149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.601183] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 
00:26:30.095 [2024-11-20 10:44:10.601472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.601510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.095 qpair failed and we were unable to recover it. 00:26:30.095 [2024-11-20 10:44:10.601772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.095 [2024-11-20 10:44:10.601806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.602013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.602047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.602195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.602242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.602445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.602479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.602629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.602664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.602866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.602902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.603087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.603121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.603335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.603373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.603593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.603629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.603867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.603900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.604110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.604146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.604317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.604352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.604502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.604535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.604675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.604712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.604927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.604965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.605083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.605118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.605363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.605399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.605604] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.605644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.605935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.605969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.606264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.606302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.606509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.606543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.606700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.606734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.607020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.607054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.607330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.607367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.607526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.607560] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.607712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.607747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.608042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.608076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.608305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.608341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 00:26:30.096 [2024-11-20 10:44:10.608546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.096 [2024-11-20 10:44:10.608581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.096 qpair failed and we were unable to recover it. 
00:26:30.096 [2024-11-20 10:44:10.608785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.608820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.609002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.609037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.609295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.609332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.609541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.609581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.609792] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.609827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 
00:26:30.097 [2024-11-20 10:44:10.610084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.610119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.610330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.610367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.610489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.610521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.610655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.610688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 00:26:30.097 [2024-11-20 10:44:10.610946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.610982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it. 
00:26:30.097 [2024-11-20 10:44:10.611182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.097 [2024-11-20 10:44:10.611229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.097 qpair failed and we were unable to recover it.
00:26:30.097 [message pair above repeated 114 more times between 10:44:10.611 and 10:44:10.641, same tqpair=0xf82ba0, addr=10.0.0.2, port=4420, errno = 111]
00:26:30.100 [2024-11-20 10:44:10.641837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.641871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-11-20 10:44:10.642151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.642185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-11-20 10:44:10.642467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.642503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-11-20 10:44:10.642727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.642761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-11-20 10:44:10.643043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.643077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 
00:26:30.100 [2024-11-20 10:44:10.643294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.100 [2024-11-20 10:44:10.643330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.100 qpair failed and we were unable to recover it. 00:26:30.100 [2024-11-20 10:44:10.643488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.643522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.643798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.643831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.644133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.644167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.644487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.644524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.644803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.644837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.645037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.645071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.645374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.645409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.645651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.645686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.645924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.645959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.646228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.646263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.646445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.646479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.646666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.646701] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.646908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.646943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.647130] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.647165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.647362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.647399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.647606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.647640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.647906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.647942] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.648127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.648162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.648456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.648491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.648715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.648749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.648937] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.648971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.649248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.649284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.649548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.649582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.649779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.649815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.650030] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.650064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.650316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.650352] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.650585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.650620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.650876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.650911] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.101 [2024-11-20 10:44:10.651107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.651142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 
00:26:30.101 [2024-11-20 10:44:10.651414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.101 [2024-11-20 10:44:10.651451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.101 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.651652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.651686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.651867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.651901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.652127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.652162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.652429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.652464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.652692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.652733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.653009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.653043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.653319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.653356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.653552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.653586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.653839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.653873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.654156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.654191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.654413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.654447] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.654715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.654750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.654932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.654967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.655189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.655232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.655485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.655519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.655753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.655787] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.655970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.656004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.656187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.656232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.656526] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.656563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.656812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.656849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.657151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.657186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.657382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.657417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.657715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.657749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.658024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.658062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.658317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.658356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.658649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.658683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.658967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.659001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.659215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.659250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.659554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.659593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 
00:26:30.102 [2024-11-20 10:44:10.659783] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.659826] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.102 qpair failed and we were unable to recover it. 00:26:30.102 [2024-11-20 10:44:10.660109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.102 [2024-11-20 10:44:10.660145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.660383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.660419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.660613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.660647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.660801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.660836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [2024-11-20 10:44:10.661057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.661092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.661374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.661413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.661607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.661643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.661855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.661889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.662220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.662257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [2024-11-20 10:44:10.662481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.662516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.662707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.662742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.662944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.662979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.663254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.663300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 00:26:30.103 [2024-11-20 10:44:10.663598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.103 [2024-11-20 10:44:10.663633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.103 qpair failed and we were unable to recover it. 
00:26:30.103 [... identical retries elided: posix.c:1054:posix_sock_create "connect() failed, errno = 111" followed by nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420" repeats from [2024-11-20 10:44:10.663908] through [2024-11-20 10:44:10.692862]; every attempt ends with "qpair failed and we were unable to recover it." ...]
00:26:30.107 [2024-11-20 10:44:10.693167] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.693239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.693790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.693843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.694165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.694217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.694436] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.694472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.694752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.694786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 
00:26:30.107 [2024-11-20 10:44:10.695066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.695099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.695382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.695424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.695732] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.695768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.696047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.696082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.696369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.696406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 
00:26:30.107 [2024-11-20 10:44:10.696553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.696587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.696861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.696896] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.697083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.697118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.697321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.697359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.697578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.697615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 
00:26:30.107 [2024-11-20 10:44:10.697901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.697936] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.698138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.698174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.698447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.698482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.698679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.698713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.698992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.699031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 
00:26:30.107 [2024-11-20 10:44:10.699311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.699350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.699631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.699665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.699946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.699981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.700280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.700316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.700533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.700568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 
00:26:30.107 [2024-11-20 10:44:10.700800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.700843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.701050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.701083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.701368] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.701413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.701628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.701662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.107 qpair failed and we were unable to recover it. 00:26:30.107 [2024-11-20 10:44:10.701868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.107 [2024-11-20 10:44:10.701903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.702216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.702251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.702508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.702545] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.702854] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.702891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.703169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.703218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.703491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.703526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.703800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.703833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.704119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.704155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.704433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.704470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.704692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.704727] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.705005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.705039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.705321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.705358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.705635] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.705671] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.705879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.705919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.706151] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.706189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.706487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.706524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.706804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.706839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.707038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.707073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.707278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.707315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.707616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.707660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.707850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.707883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.708033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.708068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.708280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.708316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.708620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.708655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.708913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.708947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.709241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.709291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.709572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.709610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.709891] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.709926] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.710132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.710166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.710470] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.710506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.710700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.710735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 
00:26:30.108 [2024-11-20 10:44:10.711036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.108 [2024-11-20 10:44:10.711071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.108 qpair failed and we were unable to recover it. 00:26:30.108 [2024-11-20 10:44:10.711328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.711366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.711675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.711711] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.712012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.712047] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.712263] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.712300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 
00:26:30.109 [2024-11-20 10:44:10.712489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.712523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.712800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.712836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.713106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.713143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.713376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.713413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.713692] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.713725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 
00:26:30.109 [2024-11-20 10:44:10.714017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.714050] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.714347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.714382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.714650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.714695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.714985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.715021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.715283] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.715319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 
00:26:30.109 [2024-11-20 10:44:10.715591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.715627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.715870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.715904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.716032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.716066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.716298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.716335] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 00:26:30.109 [2024-11-20 10:44:10.716625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.109 [2024-11-20 10:44:10.716663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.109 qpair failed and we were unable to recover it. 
00:26:30.112 [2024-11-20 10:44:10.742691] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf90af0 (9): Bad file descriptor
00:26:30.112 [2024-11-20 10:44:10.743056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.112 [2024-11-20 10:44:10.743136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.112 qpair failed and we were unable to recover it.
00:26:30.112 [2024-11-20 10:44:10.743337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.112 [2024-11-20 10:44:10.743390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.112 qpair failed and we were unable to recover it.
00:26:30.112 [2024-11-20 10:44:10.743682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.112 [2024-11-20 10:44:10.743719] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.112 qpair failed and we were unable to recover it.
00:26:30.112 [2024-11-20 10:44:10.743930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.112 [2024-11-20 10:44:10.743965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.112 qpair failed and we were unable to recover it.
00:26:30.112 [2024-11-20 10:44:10.744253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.112 [2024-11-20 10:44:10.744289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.112 qpair failed and we were unable to recover it.
00:26:30.112 [2024-11-20 10:44:10.744422] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-11-20 10:44:10.744457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-11-20 10:44:10.744658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-11-20 10:44:10.744692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-11-20 10:44:10.744901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-11-20 10:44:10.744947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.112 qpair failed and we were unable to recover it. 00:26:30.112 [2024-11-20 10:44:10.745222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.112 [2024-11-20 10:44:10.745258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.745468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.745502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.745680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.745714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.745990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.746024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.746256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.746292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.746522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.746556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.746773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.746806] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.747017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.747052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.747311] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.747346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.747472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.747505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.747704] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.747737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.747922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.747955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.748170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.748216] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.748441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.748476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.748618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.748651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.748848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.748882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.749070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.749106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.749370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.749408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.749633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.749667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.749901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.749935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.750217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.750295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.750453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.750489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.750642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.750677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.750974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.751009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.751294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.751330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.751605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.751638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.751910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.751943] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.113 [2024-11-20 10:44:10.752237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.752272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.752546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.752580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.752715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.752748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.752890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.752923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 00:26:30.113 [2024-11-20 10:44:10.753197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.113 [2024-11-20 10:44:10.753244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.113 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.753396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.753430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.753733] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.753777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.753990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.754024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.754285] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.754320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.754472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.754506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.754762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.754796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.755003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.755036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.755332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.755367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.755564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.755599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.755802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.755835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.756039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.756072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.756350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.756386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.756579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.756612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.756720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.756754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.757033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.757067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.757339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.757375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.757572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.757607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.757797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.757832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.758111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.758145] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.758384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.758420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.758628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.758661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.758799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.758834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.759033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.759067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.759322] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.759358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.759571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.759606] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 
00:26:30.114 [2024-11-20 10:44:10.759851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.759885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.760086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.760120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.760399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.760434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.760747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.114 [2024-11-20 10:44:10.760782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.114 qpair failed and we were unable to recover it. 00:26:30.114 [2024-11-20 10:44:10.761042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.761076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 
00:26:30.115 [2024-11-20 10:44:10.761276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.761312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.761528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.761562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.761761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.761794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.762037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.762072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.762373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.762410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 
00:26:30.115 [2024-11-20 10:44:10.762548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.762581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.762784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.762818] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.763094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.763128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.763344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.763382] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.763538] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.763572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 
00:26:30.115 [2024-11-20 10:44:10.763872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.763907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.764218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.764259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.764449] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.764484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.764698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.764732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.764984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.765018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 
00:26:30.115 [2024-11-20 10:44:10.765215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.765251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.765508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.765543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.765797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.765831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.766082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.766117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 00:26:30.115 [2024-11-20 10:44:10.766353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.115 [2024-11-20 10:44:10.766391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.115 qpair failed and we were unable to recover it. 
00:26:30.115 [2024-11-20 10:44:10.766699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.115 [2024-11-20 10:44:10.766734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.115 qpair failed and we were unable to recover it.
00:26:30.399 [2024-11-20 10:44:10.796315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.796351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.796543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.796577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.796782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.796816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.797097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.797131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.797411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.797448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.797690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.797724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.797983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.798017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.798241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.798277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.798559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.798593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.798899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.798932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.799140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.799174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.799343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.799378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.799603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.799638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.799860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.799894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.800078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.800112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.800295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.800330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.800528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.800563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.800766] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.800800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.801054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.801089] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.801309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.801345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.801476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.801510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.801761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.801796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.802087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.802121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.802328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.802370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.802634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.802668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.802870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.802905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.803032] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.803067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.803210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.803246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.803445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.803479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.803731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.803765] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 
00:26:30.400 [2024-11-20 10:44:10.804068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.804102] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.804302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.804338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.804490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.804524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.804723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.400 [2024-11-20 10:44:10.804757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.400 qpair failed and we were unable to recover it. 00:26:30.400 [2024-11-20 10:44:10.805018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.805052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.805255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.805292] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.805498] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.805532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.805761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.805796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.806083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.806117] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.806305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.806342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.806616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.806651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.806861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.806895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.807099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.807133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.807337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.807373] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.807523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.807558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.807763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.807798] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.807926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.807961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.808194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.808237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.808429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.808464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.808740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.808775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.809035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.809069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.809348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.809383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.809586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.809620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.809765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.809800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.810074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.810108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.810249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.810286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.810426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.810461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.810737] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.810770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.810966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.811000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.811132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.811165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.811424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.811459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.811720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.811754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.811892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.811928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.812164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.812211] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.812537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.812572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.812760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.812794] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.812995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.813029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.813215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.813250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.813439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.813474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.813587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.813621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 
00:26:30.401 [2024-11-20 10:44:10.813803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.401 [2024-11-20 10:44:10.813839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.401 qpair failed and we were unable to recover it. 00:26:30.401 [2024-11-20 10:44:10.814039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.814073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.814235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.814271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.814569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.814604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.814801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.814835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.815019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.815053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.815254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.815291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.815563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.815597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.815803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.815836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.815984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.816018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.816273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.816308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.816503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.816537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.816761] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.816797] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.817050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.817084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.817269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.817306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.817499] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.817534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.817789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.817825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.818099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.818133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.818330] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.818366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.818505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.818539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.818750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.818785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.818932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.818967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.819095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.819133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.819354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.819387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.819513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.819544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.819660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.819691] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.819905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.819939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.820139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.820174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.820328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.820362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.820644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.820679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 
00:26:30.402 [2024-11-20 10:44:10.820874] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.820907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.821144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.821178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.821320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.821355] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.821543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.821588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.402 qpair failed and we were unable to recover it. 00:26:30.402 [2024-11-20 10:44:10.821860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.402 [2024-11-20 10:44:10.821893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.822033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.822068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.822271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.822306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.822434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.822465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.822598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.822631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.822748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.822782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.822969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.823197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.823392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.823565] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.823781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.823950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.823985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.824181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.824226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.824419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.824453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.824655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.824690] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.824889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.824924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.825049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.825084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.825227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.825263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.825464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.825499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.825709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.825743] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.825885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.825919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.826103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.826137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.826400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.826435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.826616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.826651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.826905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.826938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.827224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.827260] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.827450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.827486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.827620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.827653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.827929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.827963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.828150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.828185] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.828465] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.828499] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.828647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.828681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.828974] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.829008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.829218] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.829254] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.829373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.829407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.829589] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.829624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 
00:26:30.403 [2024-11-20 10:44:10.829739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.829772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.403 qpair failed and we were unable to recover it. 00:26:30.403 [2024-11-20 10:44:10.830037] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.403 [2024-11-20 10:44:10.830071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.830200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.830246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.830454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.830495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.830621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.830655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 
00:26:30.404 [2024-11-20 10:44:10.830917] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.830951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.831149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.831182] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.831318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.831353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.831467] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.831500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.831701] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.831735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 
00:26:30.404 [2024-11-20 10:44:10.831869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.831903] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.832166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.832210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.832427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.832461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.832646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.832680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.832895] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.832929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 
00:26:30.404 [2024-11-20 10:44:10.833132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.833166] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.833372] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.833408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.833545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.833579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.833708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.833742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.833935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.833969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 
00:26:30.404 [2024-11-20 10:44:10.834153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.834187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.834427] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.834462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.834649] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.834682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.834822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.834856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.834984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.835017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 
00:26:30.404 [2024-11-20 10:44:10.835211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.835247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.835444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.835478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.835747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.835781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.835960] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.404 [2024-11-20 10:44:10.835994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.404 qpair failed and we were unable to recover it. 00:26:30.404 [2024-11-20 10:44:10.836190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.836234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 
00:26:30.405 [2024-11-20 10:44:10.836514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.836548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.836741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.836775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.836976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.837009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.837267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.837302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.837486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.837519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 
00:26:30.405 [2024-11-20 10:44:10.837714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.837749] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.837885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.837919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.838050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.838083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.838282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.838318] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 00:26:30.405 [2024-11-20 10:44:10.838460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.405 [2024-11-20 10:44:10.838492] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.405 qpair failed and we were unable to recover it. 
00:26:30.405 [2024-11-20 10:44:10.838615] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.838649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.838848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.838882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.839076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.839108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.839232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.839274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.839477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.839511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.839787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.839822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.839943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.839977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.840179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.840220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.840346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.840380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.840559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.840593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.840850] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.840883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.840997] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.841032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.841238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.841274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.841573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.841607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.841820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.841854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.841983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.842017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.842163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.842198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.842370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.842404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.842594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.842628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.842838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.842871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.843013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.843048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.843178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.843222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.843522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.843557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.843751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.843785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.843979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.844013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.844212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.844247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.405 [2024-11-20 10:44:10.844500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.405 [2024-11-20 10:44:10.844535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.405 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.844715] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.844748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.844927] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.844961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.845155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.845188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.845450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.845486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.845625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.845659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.845806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.845839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.845977] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.846012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.846138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.846172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.846443] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.846521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.846811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.846887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.847099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.847137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.847351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.847388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.847534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.847569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.847682] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.847716] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.847940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.847974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.848123] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.848158] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.848351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.848397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.848673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.848707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.848900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.848933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.849072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.849111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.849246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.849282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.849515] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.849548] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.849798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.849835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.850134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.850169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.850520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.850579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.850851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.850886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.851070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.851105] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.851382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.851418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.851626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.851659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.851848] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.851883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.852051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.852086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.852358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.852393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.852575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.406 [2024-11-20 10:44:10.852609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.406 qpair failed and we were unable to recover it.
00:26:30.406 [2024-11-20 10:44:10.852739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.852772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853213] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853362] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853720] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.853897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.853931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.854178] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.854220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.854399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.854432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.854549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.854582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.854834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.854872] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.855012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.855045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.855348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.855383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.855630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.855663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.855780] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.855813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.855933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.855965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.856106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.856139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.856342] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.856377] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.856560] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.856594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.856708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.856739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.856921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.856953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.857131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.857165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.857328] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.857363] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.857579] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.857612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.857734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.857772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.857970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.858004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.858118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.858151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.858356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.858392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.858584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.858617] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.858865] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.858900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.859031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.859064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.859259] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.859296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.859419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.859453] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.859636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.859670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.859784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.407 [2024-11-20 10:44:10.859821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.407 qpair failed and we were unable to recover it.
00:26:30.407 [2024-11-20 10:44:10.860036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.407 [2024-11-20 10:44:10.860069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.407 qpair failed and we were unable to recover it. 00:26:30.407 [2024-11-20 10:44:10.860341] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.407 [2024-11-20 10:44:10.860378] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.407 qpair failed and we were unable to recover it. 00:26:30.407 [2024-11-20 10:44:10.860517] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.407 [2024-11-20 10:44:10.860566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.407 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.860775] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.860809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.860932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.860965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.861109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.861143] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.861346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.861381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.861528] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.861571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.861818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.861852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.861963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.861997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.862199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.862247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.862429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.862463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.862669] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.862703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.862950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.862983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.863120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.863153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.863361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.863395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.863540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.863575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.863702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.863735] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.863990] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.864024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.864155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.864190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.864472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.864505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.864632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.864666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.864802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.864836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.865034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.865066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.865314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.865349] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.865625] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.865659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.865881] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.865914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.866040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.866074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.866280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.866315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.866463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.866497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.866684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.866717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.866847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.866880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.867065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.867098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.867277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.867312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.867557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.867590] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 
00:26:30.408 [2024-11-20 10:44:10.867784] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.867816] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.867992] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.408 [2024-11-20 10:44:10.868025] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.408 qpair failed and we were unable to recover it. 00:26:30.408 [2024-11-20 10:44:10.868209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.868244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.868446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.868480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.868662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.868695] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.868944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.868977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.869251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.869286] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.869502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.869549] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.869681] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.869714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.869905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.869938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.870073] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.870107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.870243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.870279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.870490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.870523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.870719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.870755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.870880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.870914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.871096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.871129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.871310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.871345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.871600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.871634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.871902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.871940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.872190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.872237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.872426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.872460] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.872590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.872624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.872907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.872940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.873113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.873146] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.873270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.873304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.873481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.873514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.873756] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.873789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.873983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.874017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.874150] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.874184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.874320] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.874354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.874481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.874514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.874693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.874726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.875004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.875037] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.875314] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.875351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.875569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.875603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 
00:26:30.409 [2024-11-20 10:44:10.875781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.875815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.876003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.876036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.409 qpair failed and we were unable to recover it. 00:26:30.409 [2024-11-20 10:44:10.876232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.409 [2024-11-20 10:44:10.876266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.876540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.876573] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.876753] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.876786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 
00:26:30.410 [2024-11-20 10:44:10.876911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.876944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.877134] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.877168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.877289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.877323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.877503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.877536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.877752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.877785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 
00:26:30.410 [2024-11-20 10:44:10.877921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.877955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.878136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.878169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.878384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.878424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.878547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.878581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 00:26:30.410 [2024-11-20 10:44:10.878700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.410 [2024-11-20 10:44:10.878732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.410 qpair failed and we were unable to recover it. 
00:26:30.410 [2024-11-20 10:44:10.878862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.410 [2024-11-20 10:44:10.878893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.410 qpair failed and we were unable to recover it.
00:26:30.410 (last three messages repeated for tqpair=0x7ff268000b90 through [2024-11-20 10:44:10.883907])
00:26:30.410 [2024-11-20 10:44:10.884217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.410 [2024-11-20 10:44:10.884298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.410 qpair failed and we were unable to recover it.
00:26:30.412 (last three messages repeated for tqpair=0xf82ba0 through [2024-11-20 10:44:10.902873])
00:26:30.413 [2024-11-20 10:44:10.903006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.903039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.903254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.903289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.903468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.903501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.903628] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.903661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.903839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.903877] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 
00:26:30.413 [2024-11-20 10:44:10.904057] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.904090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.904271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.904306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.904500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.904533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.904721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.904753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.904933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.904966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 
00:26:30.413 [2024-11-20 10:44:10.905084] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.905118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.905294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.905328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.413 [2024-11-20 10:44:10.905505] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.413 [2024-11-20 10:44:10.905537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.413 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.905645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.905679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.905857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.905890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.906062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.906095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.906275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.906310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.906524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.906557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.906751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.906784] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.906898] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.906932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.907046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.907080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.907253] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.907288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.907411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.907444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.907549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.907583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.907691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.907724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.907973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.908006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.908186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.908230] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.908403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.908436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.908629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.908661] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.908861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.908894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.909088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.909238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.909417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909450] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.909580] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.909739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.909941] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.909974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.910086] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.910118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.910357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.910393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.910630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.910663] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.910787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.910821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.910928] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.910959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.911103] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.911137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.911318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.911353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.911542] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.911574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.911745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.911778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 
00:26:30.414 [2024-11-20 10:44:10.912044] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.912083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.912264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.912299] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.912446] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.912480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.912609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.912642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.414 qpair failed and we were unable to recover it. 00:26:30.414 [2024-11-20 10:44:10.912833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.414 [2024-11-20 10:44:10.912866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.913053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.913086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.913198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.913240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.913348] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.913380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.913563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.913597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.913771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.913804] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.914058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.914092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.914282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.914317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.914441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.914473] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.914714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.914748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.914931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.914964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.915212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.915246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.915355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.915388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.915497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.915530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.915724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.915755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.915956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.915989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.916173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.916215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.916343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.916375] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.916616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.916649] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.916827] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.916860] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.917043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.917076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.917321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.917354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.917491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.917525] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.917642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.917680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.917867] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.917900] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 00:26:30.415 [2024-11-20 10:44:10.918011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.415 [2024-11-20 10:44:10.918043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.415 qpair failed and we were unable to recover it. 
00:26:30.415 [2024-11-20 10:44:10.918245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.415 [2024-11-20 10:44:10.918279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.415 qpair failed and we were unable to recover it.
00:26:30.415 [... identical connect()/qpair-failure triple for tqpair=0xf82ba0 (errno = 111, ECONNREFUSED) repeated 79 times between 10:44:10.918245 and 10:44:10.934902 ...]
00:26:30.417 [2024-11-20 10:44:10.935220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.417 [2024-11-20 10:44:10.935293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.417 qpair failed and we were unable to recover it.
00:26:30.418 [... identical triple for tqpair=0x7ff260000b90 repeated 36 times between 10:44:10.935220 and 10:44:10.942866 ...]
00:26:30.418 [2024-11-20 10:44:10.943040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.943072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.418 [2024-11-20 10:44:10.943271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.943305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.418 [2024-11-20 10:44:10.943480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.943513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.418 [2024-11-20 10:44:10.943696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.943728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.418 [2024-11-20 10:44:10.943935] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.943968] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 
00:26:30.418 [2024-11-20 10:44:10.944212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.944246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.418 [2024-11-20 10:44:10.944381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.418 [2024-11-20 10:44:10.944413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.418 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.944524] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.944557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.944743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.944777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.944954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.944986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.945110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.945142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.945369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.945403] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.945572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.945604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.945795] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.945827] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.945951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.945984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.946088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.946121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.946295] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.946330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.946462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.946500] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.946697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.946729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.946944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.946976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.947177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.947218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.947352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.947384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.947503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.947536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.947724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.947757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.947882] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.947914] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.948101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.948134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.948305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.948340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.948480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.948513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.948634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.948668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.948861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.948894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.949007] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.949221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.949434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.949597] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.949752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.949968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.949999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.950182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.950239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.950426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.950459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.950590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.950624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.950740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.950772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.419 [2024-11-20 10:44:10.950906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.950939] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.951055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.951087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.951269] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.951304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.951439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.951471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 00:26:30.419 [2024-11-20 10:44:10.951593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.419 [2024-11-20 10:44:10.951625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.419 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.951813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.951845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.952041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.952075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.952335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.952369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.952573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.952605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.952786] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.952819] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.953025] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.953057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.953197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.953240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.953365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.953397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.953590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.953623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.953805] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.953837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.953979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.954012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.954284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.954319] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.954594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.954638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.954770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.954803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.955022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.955054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.955244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.955279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.955477] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.955510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.955712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.955744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.955983] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.956017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.956221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.956255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.956495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.956528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.956665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.956697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.956817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.956851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.957056] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.957088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.957334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.957369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.957478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.957511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.957629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.957662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.957843] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.957876] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.958092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.958125] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 00:26:30.420 [2024-11-20 10:44:10.958232] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.420 [2024-11-20 10:44:10.958266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.420 qpair failed and we were unable to recover it. 
00:26:30.420 [2024-11-20 10:44:10.958397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.420 [2024-11-20 10:44:10.958428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.420 qpair failed and we were unable to recover it.
00:26:30.420 [... the same three-line sequence — posix.c:1054:posix_sock_create connect() failed (errno = 111), nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock sock connection error, "qpair failed and we were unable to recover it." — repeats continuously from 10:44:10.958566 through 10:44:10.982806, alternating between tqpair=0x7ff260000b90 and tqpair=0x7ff268000b90, always with addr=10.0.0.2, port=4420 ...]
00:26:30.423 [2024-11-20 10:44:10.982933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-11-20 10:44:10.982964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.423 qpair failed and we were unable to recover it. 00:26:30.423 [2024-11-20 10:44:10.983235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.423 [2024-11-20 10:44:10.983270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.983388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.983420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.983660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.983692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.983818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.983850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.984033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.984066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.984188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.984229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.984411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.984445] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.984684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.984717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.984921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.984954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.985215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.985248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.985439] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.985472] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.985652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.985683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.985809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.985842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.985961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.986001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.986176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.986217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.986392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.986424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.986680] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.986714] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.986902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.986935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.987200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.987245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.987431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.987463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.987578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.987611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.987791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.987823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.988062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.988095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.988349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.988384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.988623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.988656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.988835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.988868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.988986] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.989017] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.989310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.989345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.989591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.989624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.989749] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.989782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.989914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.989946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.990239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.990273] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.990379] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.990410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.990593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.990626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 
00:26:30.424 [2024-11-20 10:44:10.990798] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.990831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.991010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.991042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.991224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.991258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.991435] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.991468] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.424 qpair failed and we were unable to recover it. 00:26:30.424 [2024-11-20 10:44:10.991653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.424 [2024-11-20 10:44:10.991685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.991920] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.991952] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.992138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.992172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.992391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.992423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.992633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.992667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.992849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.992883] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.993053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.993086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.993223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.993257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.993496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.993530] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.993647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.993679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.993806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.993839] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.994033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.994066] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.994246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.994281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.994453] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.994486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.994734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.994767] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.995005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.995043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.995236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.995271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.995535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.995568] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.995696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.995730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.995989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.996021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.996267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.996302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.996549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.996582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.996719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.996752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.997010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.997164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997197] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.997391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997425] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.997547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.997700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.997904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.997937] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.998055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.998088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.998217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.998251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.998463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.998495] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.998672] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.998706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.998940] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.998972] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.999088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.999120] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.999359] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.999395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 
00:26:30.425 [2024-11-20 10:44:10.999651] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.999683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:10.999869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.425 [2024-11-20 10:44:10.999901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.425 qpair failed and we were unable to recover it. 00:26:30.425 [2024-11-20 10:44:11.000075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.000108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.000290] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.000324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.000563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.000596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.000716] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.000750] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.000887] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.000920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.001031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.001064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.001181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.001219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.001487] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.001520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.001691] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.001723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.001914] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.001947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.002128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.002161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.002432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.002466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.002721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.002753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.002970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.003242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.003387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003420] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.003599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.003759] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.003923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.003956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.004132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.004168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.004363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.004397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.004518] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.004551] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.004690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.004723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.004902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.004935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.005139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.005171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.005369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.005404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.005664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.005696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.005950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.005984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.006162] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.006194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.006389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.006423] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.006674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.006706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.006951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.006984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.007194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.007241] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.007380] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.007412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.007544] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.007577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.007813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.007846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.008028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.008061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 00:26:30.426 [2024-11-20 10:44:11.008241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.426 [2024-11-20 10:44:11.008276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.426 qpair failed and we were unable to recover it. 
00:26:30.426 [2024-11-20 10:44:11.008468] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.008501] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.008674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.008706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.008836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.008868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.009051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.009084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.009278] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.009313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.009586] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.009618] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.009838] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.009871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.010003] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010036] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.010219] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.010371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.010587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.010739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.010886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.010920] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.011214] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.011247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.011353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.011386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.011490] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.011522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.011735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.011768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.011889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.011922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.012039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.012072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.012249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.012288] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.012555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.012588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.012772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.012805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.012918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.012951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.013240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.013275] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.013533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.013566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.013847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.013881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.014087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.014119] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.014358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.014392] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.014577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.014610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.014726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.014759] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 
00:26:30.427 [2024-11-20 10:44:11.014880] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.014912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.015110] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.015142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.427 [2024-11-20 10:44:11.015354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.427 [2024-11-20 10:44:11.015388] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.427 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.015576] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.015610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.015849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.015881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.428 [2024-11-20 10:44:11.016120] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.016153] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.016287] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.016321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.016581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.016615] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.016812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.016845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.017041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.017075] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.428 [2024-11-20 10:44:11.017197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.017243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.017451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.017484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.017703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.017736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.017846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.017880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.018049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.018083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.428 [2024-11-20 10:44:11.018281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.018316] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.018457] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.018491] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.018664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.018696] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.018897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.018931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.019166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.019199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.428 [2024-11-20 10:44:11.019391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.019424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.019610] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.019643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.019822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.019855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.020058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.020090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.020299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.020334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.428 [2024-11-20 10:44:11.020522] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.020554] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.020742] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.020776] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.020911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.020944] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.021155] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.021188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 00:26:30.428 [2024-11-20 10:44:11.021323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.428 [2024-11-20 10:44:11.021362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.428 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-11-20 10:44:11.044224] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.044258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.044493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.044565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.044738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.044809] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.045006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.045158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-11-20 10:44:11.045383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045418] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.045534] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.045738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.045877] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.045907] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.046012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.046045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-11-20 10:44:11.046242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.046276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.046539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.046572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.046755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.046788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.047028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.047061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.047243] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.047287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 
00:26:30.431 [2024-11-20 10:44:11.047409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.047443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.047572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.047604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.047819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.047852] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.048093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.431 [2024-11-20 10:44:11.048127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.431 qpair failed and we were unable to recover it. 00:26:30.431 [2024-11-20 10:44:11.048367] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.048402] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.048665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.048698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.048804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.048838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.048967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.049001] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.049124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.049157] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.049414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.049449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.049633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.049666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.049841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.049874] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.050065] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.050097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.050228] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.050262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.050434] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.050467] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.050646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.050680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.050799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.050831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.051006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051038] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.051152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.051324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.051578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.051740] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.051947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.051979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.052165] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.052198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.052451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.052483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.052600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.052633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.052765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.052810] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.052942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.052976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.053163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.053196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.053340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.053374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.053608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.053641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.053922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.053955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.054127] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.054160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.054293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.054328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.054469] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.054503] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 
00:26:30.432 [2024-11-20 10:44:11.054620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.054652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.054825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.054858] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.055035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.432 [2024-11-20 10:44:11.055068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.432 qpair failed and we were unable to recover it. 00:26:30.432 [2024-11-20 10:44:11.055248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.055284] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.055472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.055506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.055718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.055751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.055991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.056024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.056217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.056251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.056496] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.056528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.056648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.056682] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.056819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.056851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.056971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.057003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.057271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.057306] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.057578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.057610] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.057746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.057779] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.058036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.058069] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.058176] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.058219] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.058343] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.058376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.058611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.058651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.058833] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.058866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.059040] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.059073] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.059190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.059247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.059486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.059520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.059630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.059662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.059787] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.059820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.060001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.060034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.060274] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.060308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.060481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.060515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.060693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.060726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 00:26:30.433 [2024-11-20 10:44:11.060835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.433 [2024-11-20 10:44:11.060868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.433 qpair failed and we were unable to recover it. 
00:26:30.433 [2024-11-20 10:44:11.061054] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.433 [2024-11-20 10:44:11.061087] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.433 qpair failed and we were unable to recover it.
[... the same connect()/qpair-failure triplet for tqpair=0xf82ba0 repeats with successive timestamps through 2024-11-20 10:44:11.067120 ...]
00:26:30.434 [2024-11-20 10:44:11.067403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.434 [2024-11-20 10:44:11.067442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.434 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7ff260000b90 repeats with successive timestamps through 2024-11-20 10:44:11.075573 ...]
00:26:30.435 [2024-11-20 10:44:11.075710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.435 [2024-11-20 10:44:11.075755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:30.435 qpair failed and we were unable to recover it.
[... the same triplet for tqpair=0x7ff268000b90 repeats with successive timestamps through 2024-11-20 10:44:11.085348 ...]
00:26:30.436 [2024-11-20 10:44:11.085549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.085582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.436 qpair failed and we were unable to recover it. 00:26:30.436 [2024-11-20 10:44:11.085705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.085737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.436 qpair failed and we were unable to recover it. 00:26:30.436 [2024-11-20 10:44:11.085930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.085962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.436 qpair failed and we were unable to recover it. 00:26:30.436 [2024-11-20 10:44:11.086075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.086114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.436 qpair failed and we were unable to recover it. 00:26:30.436 [2024-11-20 10:44:11.086377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.086412] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.436 qpair failed and we were unable to recover it. 
00:26:30.436 [2024-11-20 10:44:11.086599] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.436 [2024-11-20 10:44:11.086632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.086869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.086902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.087075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.087108] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.087324] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.087359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.087622] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.087655] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.087862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.087895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.088036] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.088068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.088271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.088305] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.088492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.088524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.088652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.088685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.088859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.088893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.089079] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.089112] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.089242] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.089276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.089464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.089497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.089609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.089642] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.089763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.089796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.089972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.090199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.090440] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090474] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.090601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090633] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.090810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.090965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.090998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.091119] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.091152] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.091349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.091384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.091508] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.091540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.091731] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.091764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.092006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.092039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.092222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.092256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.092375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.092407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.092587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.092631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.092815] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.092849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.092973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093006] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.093116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.093303] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.093521] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093553] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 
00:26:30.437 [2024-11-20 10:44:11.093677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.093847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.093879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.094181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.437 [2024-11-20 10:44:11.094223] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.437 qpair failed and we were unable to recover it. 00:26:30.437 [2024-11-20 10:44:11.094405] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.094438] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.094570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.094603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.094773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.094805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.094987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.095149] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095180] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.095316] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095350] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.095525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.095745] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.095950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.095983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.096102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.096135] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.096252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.096285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.096473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.096507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.096700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.096732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.096839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.096871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.096978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.097128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097161] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.097350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.097567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.097776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.097932] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.097965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.098141] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.098173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.098361] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.098393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.098567] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.098599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.098725] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.098757] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.098872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.098904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.099019] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.099051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.099222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.099256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.099389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.099422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.099621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.099653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.099894] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.099925] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.100055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.100090] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.100222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.100264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.438 [2024-11-20 10:44:11.100392] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.100424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.100598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.100630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.100813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.100845] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.101049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.101082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 00:26:30.438 [2024-11-20 10:44:11.101197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.438 [2024-11-20 10:44:11.101238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.438 qpair failed and we were unable to recover it. 
00:26:30.724 [2024-11-20 10:44:11.125011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.125044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.125237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.125271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.125389] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.125422] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.125558] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.125591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.125800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.125838] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 
00:26:30.724 [2024-11-20 10:44:11.126024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.126057] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.126262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.126296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.126403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.126434] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.126679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.126713] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.126951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.126984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 
00:26:30.724 [2024-11-20 10:44:11.127170] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.127210] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.127391] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.127424] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.127602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.724 [2024-11-20 10:44:11.127634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.724 qpair failed and we were unable to recover it. 00:26:30.724 [2024-11-20 10:44:11.127807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.127840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.128024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.128056] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.128227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.128261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.128444] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.128477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.128664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.128697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.128884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.128918] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.129099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.129132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.129377] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.129411] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.129533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.129566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.129741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.129775] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.129958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.129991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.130252] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.130287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.130481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.130514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.130705] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.130737] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.130921] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.130955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.131220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.131255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.131365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.131397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.131634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.131666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.131908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.131941] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.132064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.132096] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.132326] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.132362] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.132541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.132574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.132751] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.132783] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.132954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.132986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.133159] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.133192] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.133373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.133406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.133609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.133643] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.133765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.133799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.134063] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.134097] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.134288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.134323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.134588] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.134621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.134810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.134854] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 
00:26:30.725 [2024-11-20 10:44:11.135052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.135085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.135270] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.725 [2024-11-20 10:44:11.135303] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.725 qpair failed and we were unable to recover it. 00:26:30.725 [2024-11-20 10:44:11.135478] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.135510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.135693] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.135726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.135855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.135888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.136132] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.136165] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.136349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.136384] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.136572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.136605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.136802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.136834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.136957] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.136990] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.137100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.137133] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.137369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.137404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.137595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.137627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.137819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.137851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.138038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.138071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.138256] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.138291] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.138530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.138563] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.138825] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.138859] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.138985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.139018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.139302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.139337] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.139460] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.139494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.139678] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.139712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.139888] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.139921] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.140099] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.140132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.140396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.140431] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.140572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.140604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.140738] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.140772] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.141017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.141051] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.141319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.141353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.141592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.141626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 
00:26:30.726 [2024-11-20 10:44:11.141800] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.141833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.142009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.142041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.142227] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.726 [2024-11-20 10:44:11.142261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.726 qpair failed and we were unable to recover it. 00:26:30.726 [2024-11-20 10:44:11.142445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-20 10:44:11.142477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 00:26:30.727 [2024-11-20 10:44:11.142684] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.727 [2024-11-20 10:44:11.142718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.727 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.167072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.167104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.167304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.167340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.167633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.167667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.167847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.167880] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.168061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.168094] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.168299] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.168334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.168529] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.168562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.168770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.168802] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.168985] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.169018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.169255] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.169289] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.169425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.169457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.169641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.169674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.169793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.169825] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.170035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.170067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.170329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.170371] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.170492] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.170526] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.170707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.170740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.170872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.170904] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.171074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.171107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.171317] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.171351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.171540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.171572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.171811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.171844] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.172035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.172179] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.172393] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 
00:26:30.730 [2024-11-20 10:44:11.172553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.172709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.172959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.172993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.730 [2024-11-20 10:44:11.173114] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.730 [2024-11-20 10:44:11.173147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.730 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.173394] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.173427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.173603] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.173635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.173760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.173793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.173961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.173995] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.174109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.174142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.174351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.174386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.174646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.174680] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.174859] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.174892] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.175153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.175186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.175325] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.175359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.175488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.175520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.175697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.175729] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.175909] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.175981] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.176222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.176263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.176464] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.176497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.176763] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.176796] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.176922] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.176955] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.177140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.177174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.177327] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.177359] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.177476] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.177508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.177748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.177781] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.177923] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.177956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.178146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.178179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.178429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.178461] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.178727] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.178758] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.731 [2024-11-20 10:44:11.178938] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.178971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 
00:26:30.731 [2024-11-20 10:44:11.179190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.731 [2024-11-20 10:44:11.179233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.731 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.179426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.179457] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.179696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.179728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.179856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.179888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.180077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.180109] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 
00:26:30.756 [2024-11-20 10:44:11.180289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.180324] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.180566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.180597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.180772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.180803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.180980] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.181013] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.181211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.181243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 
00:26:30.756 [2024-11-20 10:44:11.181374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.181408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.181648] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.181679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.181862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.181895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.182078] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.182116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.182304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.182338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 
00:26:30.756 [2024-11-20 10:44:11.182509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.182540] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.182666] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.182698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.182819] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.182850] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.183039] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.183071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 00:26:30.756 [2024-11-20 10:44:11.183254] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.756 [2024-11-20 10:44:11.183287] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.756 qpair failed and we were unable to recover it. 
00:26:30.756 [2024-11-20 10:44:11.183421] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.756 [2024-11-20 10:44:11.183455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:30.756 qpair failed and we were unable to recover it.
[... the same three-line record — posix.c:1054:posix_sock_create "connect() failed, errno = 111", nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock "sock connection error", "qpair failed and we were unable to recover it." — repeats continuously from 10:44:11.183 through 10:44:11.209, always against addr=10.0.0.2, port=4420, for tqpair=0xf82ba0, 0x7ff25c000b90, and 0x7ff268000b90 ...]
00:26:30.760 [2024-11-20 10:44:11.209553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.209587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.209719] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.209754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.209939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.209970] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.210104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.210138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.210261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.210296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 
00:26:30.760 [2024-11-20 10:44:11.210418] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.210451] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.210633] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.210667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.210796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.210829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.211006] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.211040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.211222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.211257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 
00:26:30.760 [2024-11-20 10:44:11.211388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.211421] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.211548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.211580] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.213458] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.213517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.213812] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.213846] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.214058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.214092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 
00:26:30.760 [2024-11-20 10:44:11.214334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.214369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.214555] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.214588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.214722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.214755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.214930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.214962] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.215247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.215282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 
00:26:30.760 [2024-11-20 10:44:11.215473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.215507] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.215612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.215645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.215831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.215865] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.215989] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.216019] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.216220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.216251] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 
00:26:30.760 [2024-11-20 10:44:11.216485] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.216519] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.216712] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.216745] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.216930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.760 [2024-11-20 10:44:11.216969] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.760 qpair failed and we were unable to recover it. 00:26:30.760 [2024-11-20 10:44:11.217237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.217269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.217387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.217417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 
00:26:30.761 [2024-11-20 10:44:11.217653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.217683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.217799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.217829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.217930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.217961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.218091] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.218121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.218288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.218320] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 
00:26:30.761 [2024-11-20 10:44:11.218512] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.218543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.218713] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.218756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.218886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.218919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.219102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.219136] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.219313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.219346] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 
00:26:30.761 [2024-11-20 10:44:11.219537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.219570] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.219702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.219747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.219924] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.219954] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.220060] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.220091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.220281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.220313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 
00:26:30.761 [2024-11-20 10:44:11.220494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.220524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 3363716 Killed "${NVMF_APP[@]}" "$@" 00:26:30.761 [2024-11-20 10:44:11.220709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.220739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.220936] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.220967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.221095] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.221126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:26:30.761 [2024-11-20 10:44:11.221351] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.221383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 
00:26:30.761 [2024-11-20 10:44:11.221500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.221531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.221706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.221736] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:26:30.761 [2024-11-20 10:44:11.221915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.221946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:30.761 [2024-11-20 10:44:11.222247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.222321] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@726 -- # xtrace_disable 
00:26:30.761 [2024-11-20 10:44:11.222536] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.222571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:30.761 [2024-11-20 10:44:11.222821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.222856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.223000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.761 [2024-11-20 10:44:11.223034] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.761 qpair failed and we were unable to recover it. 00:26:30.761 [2024-11-20 10:44:11.223222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.223257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.223384] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.223419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 [2024-11-20 10:44:11.223659] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.223692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.223878] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.223912] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.224031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.224065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.224185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.224228] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.224437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.224471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 [2024-11-20 10:44:11.224646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.224679] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.224945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.224987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.225177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.225218] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.225334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.225367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.225549] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.225582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 [2024-11-20 10:44:11.225707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.225740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.225956] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.225988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.226190] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.226238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.226376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.226409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.226584] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.226614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 [2024-11-20 10:44:11.226863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.226893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.227015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.227045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.227223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.227259] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.227382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.227413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.227611] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.227644] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 [2024-11-20 10:44:11.227925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.227959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.228157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.228190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.228340] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.228374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.228564] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.228598] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 00:26:30.762 [2024-11-20 10:44:11.228791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.762 [2024-11-20 10:44:11.228824] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.762 qpair failed and we were unable to recover it. 
00:26:30.762 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 
00:26:30.762 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@328 -- # nvmfpid=3364438 
00:26:30.762 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@329 -- # waitforlisten 3364438 
00:26:30.762 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@835 -- # '[' -z 3364438 ']' 
00:26:30.763 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:26:30.763 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@840 -- # local max_retries=100 
00:26:30.763 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:26:30.763 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:26:30.763 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@844 -- # xtrace_disable 
00:26:30.763 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
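[Editor's note: the `waitforlisten 3364438` call traced above (with `rpc_addr=/var/tmp/spdk.sock` and `max_retries=100`) polls until the target process is alive and its RPC socket appears. The sketch below is a hypothetical, simplified reconstruction of that pattern, not the real `autotest_common.sh` helper; the function name and sleep interval are illustrative.]

```shell
# Illustrative sketch of a waitforlisten-style helper: succeed once the pid
# is alive AND its UNIX-domain RPC socket exists; fail if the process dies
# or we exhaust the retry budget. (Names and timing are assumptions.)
wait_for_listen() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=${3:-100}
    local i=0
    while [ "$i" -lt "$max_retries" ]; do
        kill -0 "$pid" 2>/dev/null || return 1   # target process exited
        if [ -S "$rpc_addr" ]; then              # RPC socket is present
            return 0
        fi
        sleep 0.5
        i=$((i + 1))
    done
    return 1                                     # timed out waiting
}
```

Making `max_retries` a parameter keeps the helper testable; the traced run uses the default of 100.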
00:26:30.765 [2024-11-20 10:44:11.246650] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.246683] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.246807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.246840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.247053] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.247220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.247357] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 
00:26:30.765 [2024-11-20 10:44:11.247491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247521] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.247632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.247845] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.247873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.248061] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.248273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 
00:26:30.765 [2024-11-20 10:44:11.248489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.248640] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248670] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.248770] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.248896] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.248927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.249092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.249124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 
00:26:30.765 [2024-11-20 10:44:11.249248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.249280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.765 [2024-11-20 10:44:11.249396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.765 [2024-11-20 10:44:11.249426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.765 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.249595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.249624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.249856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.249888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.250058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.250307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.250441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250471] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.250578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250607] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.250821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.250963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.250992] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.251096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.251237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251276] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.251396] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251426] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.251590] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251625] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.251734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251771] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.251946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.251976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.252077] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.252273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.252491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.252687] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252717] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.252832] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252862] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.252967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.252997] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.253124] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.253154] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.253335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.253366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.253548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.253579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.253748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.253778] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.253897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.253928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.254046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.254076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.254187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.254247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.254430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.254463] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 
00:26:30.766 [2024-11-20 10:44:11.254638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.254669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.254824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.766 [2024-11-20 10:44:11.254855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.766 qpair failed and we were unable to recover it. 00:26:30.766 [2024-11-20 10:44:11.255042] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.255304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255336] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.255506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255537] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.255653] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.255799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255829] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.255946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.255978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.256215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.256247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.256387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.256417] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.256531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.256561] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.256698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.256728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.256856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.256886] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.256998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257028] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.257194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257234] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.257400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.257621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257653] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.257782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.257926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.257956] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.258068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.258099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.258280] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.258312] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.258423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.258455] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.258654] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.258684] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.258806] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.258842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.259022] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259052] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.259229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259262] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.259374] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.259591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.259726] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.259863] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.259893] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 
00:26:30.767 [2024-11-20 10:44:11.260083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.260114] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.260226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.260258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.260441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.260470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.260571] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.767 [2024-11-20 10:44:11.260601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.767 qpair failed and we were unable to recover it. 00:26:30.767 [2024-11-20 10:44:11.260710] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.260740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 
00:26:30.768 [2024-11-20 10:44:11.260902] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.260933] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 00:26:30.768 [2024-11-20 10:44:11.261111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.261141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 00:26:30.768 [2024-11-20 10:44:11.261282] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.261314] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 00:26:30.768 [2024-11-20 10:44:11.261419] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.261448] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 00:26:30.768 [2024-11-20 10:44:11.261548] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.768 [2024-11-20 10:44:11.261579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.768 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.279720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.279751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.279884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.279917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.280027] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:30.771 [2024-11-20 10:44:11.280047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280072] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.771 [2024-11-20 10:44:11.280081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.280187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280229] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.280331] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.280535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.280668] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280698] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.280871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.280902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.281087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281118] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.281246] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.281402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.281540] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281571] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.281695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.281836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.281867] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.282043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.282220] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.282376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.282525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282558] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.282797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.282954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.282986] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.283088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.283229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.283451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283484] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.283673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283707] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 
00:26:30.771 [2024-11-20 10:44:11.283813] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.283962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.283996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.284126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.284159] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.771 qpair failed and we were unable to recover it. 00:26:30.771 [2024-11-20 10:44:11.284281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.771 [2024-11-20 10:44:11.284315] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.284456] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.284489] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.284602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.284634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.284804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.284837] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.284954] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.284987] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.285100] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.285131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.285329] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.285364] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.285479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.285511] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.285721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.285754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.285884] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.285917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.286104] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286138] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.286245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.286473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.286639] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.286781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286813] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.286951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.286991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.287109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.287141] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.287339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.287374] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.287554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.287587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.287791] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.287823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.288000] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288033] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.288156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288190] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.288321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288354] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.288480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288513] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.288632] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288664] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.288907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.288940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.289046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.289079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.289193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.289237] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.289354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.289387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.289575] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.289608] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.289876] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.289909] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.290033] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290065] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.772 [2024-11-20 10:44:11.290237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290270] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.290475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.290619] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290651] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.290755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290789] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 00:26:30.772 [2024-11-20 10:44:11.290900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.772 [2024-11-20 10:44:11.290932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.772 qpair failed and we were unable to recover it. 
00:26:30.773 [2024-11-20 10:44:11.291046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291079] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.291200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291240] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.291417] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291449] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.291551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291583] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.291695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 
00:26:30.773 [2024-11-20 10:44:11.291908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.291940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.292111] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.292323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292358] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.292474] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.292623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 
00:26:30.773 [2024-11-20 10:44:11.292779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.292942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.292974] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.293147] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.293179] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.293382] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.293416] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.293519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.293552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 
00:26:30.773 [2024-11-20 10:44:11.293683] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.293715] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.293885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.293919] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.294050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.294200] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.294371] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294404] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 
00:26:30.773 [2024-11-20 10:44:11.294593] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294626] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.294797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294830] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.294947] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.294979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.295093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.295124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 00:26:30.773 [2024-11-20 10:44:11.295247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.773 [2024-11-20 10:44:11.295281] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.773 qpair failed and we were unable to recover it. 
00:26:30.776 [2024-11-20 10:44:11.314601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.776 [2024-11-20 10:44:11.314631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.776 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.314741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.314770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.314943] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.314973] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.315087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.315121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.315294] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.315327] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.315627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.315657] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.315835] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.315864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.315961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.315991] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.316157] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.316304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.316504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.316638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.316773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316803] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.316901] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.316929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.317041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317072] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.317169] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.317411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.317568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317597] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.317697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317726] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.317899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.317928] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.318075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318104] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.318212] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.318346] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.318493] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.318630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.318762] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.318792] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.318978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319008] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.319107] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319137] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.319318] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.319486] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319517] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.319630] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.319781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.319955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.319985] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.320240] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.320271] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 00:26:30.777 [2024-11-20 10:44:11.320447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.777 [2024-11-20 10:44:11.320477] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.777 qpair failed and we were unable to recover it. 
00:26:30.777 [2024-11-20 10:44:11.320583] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.320612] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.320723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.320753] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.320852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.320881] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.321050] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.321194] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321231] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.321339] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.321546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321575] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.321757] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321788] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.321962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.321996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.322115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.322273] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322304] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.322430] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.322561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322592] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.322695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322725] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.322831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.322970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.322999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.323102] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323131] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.323309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.323452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323482] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.323581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323611] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.323711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.323910] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.323938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.324041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324071] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.324265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.324409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.324547] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.324679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.324811] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.324840] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.325062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.325092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.325262] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.325293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.325397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.325428] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.325601] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.325631] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.325818] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.325849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.325971] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.326000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.326101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.326132] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 
00:26:30.778 [2024-11-20 10:44:11.326248] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.326279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.326399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.326429] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.326559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.778 [2024-11-20 10:44:11.326588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.778 qpair failed and we were unable to recover it. 00:26:30.778 [2024-11-20 10:44:11.326776] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.326807] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 00:26:30.779 [2024-11-20 10:44:11.326918] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.326947] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 
00:26:30.779 [2024-11-20 10:44:11.327076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.327106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 00:26:30.779 [2024-11-20 10:44:11.327217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.327248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 00:26:30.779 [2024-11-20 10:44:11.327370] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.327400] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 00:26:30.779 [2024-11-20 10:44:11.327535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.327565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 00:26:30.779 [2024-11-20 10:44:11.327660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.779 [2024-11-20 10:44:11.327689] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.779 qpair failed and we were unable to recover it. 
00:26:30.779 [... the same three-line record (posix.c:1054 connect() failed, errno = 111; nvme_tcp.c:2288 sock connection error with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it.") repeats through 10:44:11.347995 for tqpairs 0x7ff260000b90, 0x7ff268000b90, 0x7ff25c000b90, and 0xf82ba0 ...]
00:26:30.782 [2024-11-20 10:44:11.348199] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.348267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.348399] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.348435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.348553] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.348586] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.348696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.348731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.348969] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.349003] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 
00:26:30.782 [2024-11-20 10:44:11.349177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.349222] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.349400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.349433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.349607] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.349640] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.349816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.349849] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.350035] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.350068] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 
00:26:30.782 [2024-11-20 10:44:11.350217] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.350255] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.350497] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.350531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.350730] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.350764] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.351005] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.351039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.351156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.351189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 
00:26:30.782 [2024-11-20 10:44:11.351441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.351476] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.351690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.351723] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.351847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.351879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.352049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.352083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.782 qpair failed and we were unable to recover it. 00:26:30.782 [2024-11-20 10:44:11.352223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.782 [2024-11-20 10:44:11.352257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 
00:26:30.783 [2024-11-20 10:44:11.352441] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.352475] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.352657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.352693] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.352801] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.352833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.353048] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353081] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.353267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 
00:26:30.783 [2024-11-20 10:44:11.353425] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.353566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353600] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.353793] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353833] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.353955] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.353988] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.354113] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.354147] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 
00:26:30.783 [2024-11-20 10:44:11.354332] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.354366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.354473] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.354506] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.354616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.354650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.354771] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.354805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.354982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355016] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 
00:26:30.783 [2024-11-20 10:44:11.355196] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.355445] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355478] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.355594] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355627] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.355785] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355817] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 00:26:30.783 [2024-11-20 10:44:11.355933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.355966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.783 qpair failed and we were unable to recover it. 
00:26:30.783 [2024-11-20 10:44:11.356251] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.783 [2024-11-20 10:44:11.356285] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.356475] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.356508] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.356620] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.356652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.356839] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.356871] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.357011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.357044] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 
00:26:30.784 [2024-11-20 10:44:11.357257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.357293] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.357472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.357505] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.357626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.357658] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.357779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.357811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.358009] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358040] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 
00:26:30.784 [2024-11-20 10:44:11.358156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.358309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.358452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358483] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.358612] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358645] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.358755] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 
00:26:30.784 [2024-11-20 10:44:11.358929] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.358961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.359066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.359226] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359261] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.359376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.359511] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359543] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 
00:26:30.784 [2024-11-20 10:44:11.359714] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359747] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.359866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.359898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.360088] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.360121] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.360230] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.360264] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 00:26:30.784 [2024-11-20 10:44:11.360375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.784 [2024-11-20 10:44:11.360407] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.784 qpair failed and we were unable to recover it. 
00:26:30.784 [2024-11-20 10:44:11.360519] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.360552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.360676] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.360709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.360892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.360919] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:26:30.785 [2024-11-20 10:44:11.360932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.361055] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.361197] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 
00:26:30.785 [2024-11-20 10:44:11.361349] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361381] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.361561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.361709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361742] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.361872] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.361905] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 00:26:30.785 [2024-11-20 10:44:11.362034] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.785 [2024-11-20 10:44:11.362067] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.785 qpair failed and we were unable to recover it. 
00:26:30.785 [2024-11-20 10:44:11.362193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.785 [2024-11-20 10:44:11.362248] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:30.785 qpair failed and we were unable to recover it.
00:26:30.789 [2024-11-20 10:44:11.380790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.380821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.380945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.380977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.381138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.381171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.381315] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.381360] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.381647] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.381681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 
00:26:30.789 [2024-11-20 10:44:11.381862] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.381895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.382013] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.382045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.382161] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.382200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.382482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.382514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.382721] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.382754] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 
00:26:30.789 [2024-11-20 10:44:11.382944] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.382976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.383101] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.383134] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.383267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.383302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.383410] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.383443] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.383568] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.383601] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 
00:26:30.789 [2024-11-20 10:44:11.383837] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.383870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.383984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.384126] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.384365] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.384581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 
00:26:30.789 [2024-11-20 10:44:11.384803] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384836] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.384950] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.384983] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.385164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.385198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.385447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.385480] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.385595] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.385628] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 
00:26:30.789 [2024-11-20 10:44:11.385836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.385868] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.385991] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.386024] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.386146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.386178] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.386366] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.789 [2024-11-20 10:44:11.386399] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.789 qpair failed and we were unable to recover it. 00:26:30.789 [2024-11-20 10:44:11.386509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.386544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.386735] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.386768] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.386946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.386978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.387093] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.387126] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.387245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.387280] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.387466] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.387498] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.387685] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.387718] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.387842] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.387875] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.387982] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388015] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.388122] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388155] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.388335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.388488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.388627] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388660] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.388782] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388815] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.388926] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.388959] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.389072] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.389106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.389291] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.389330] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.389543] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.389577] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.389754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.389793] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.389998] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.390031] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.390223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.390258] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.390386] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.390419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.390608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.390641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.390817] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.390851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.390979] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391012] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.391195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391242] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.391353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.391494] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.391711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391744] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.391933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.391966] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.392136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.392169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.392307] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.392345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.392585] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.392635] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.392788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.392822] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.392945] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.392979] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.393097] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.393129] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.393305] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.393341] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.393554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.393587] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.393779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.393812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.393931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.393964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 00:26:30.790 [2024-11-20 10:44:11.394164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.394196] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.790 qpair failed and we were unable to recover it. 
00:26:30.790 [2024-11-20 10:44:11.394337] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.790 [2024-11-20 10:44:11.394369] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.394488] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.394520] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.394634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.394666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.394788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.394821] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.395026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395060] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.395181] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.395334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395366] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.395556] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395589] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.395700] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395732] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.395913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.395946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.396066] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.396099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.396281] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.396317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.396500] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.396532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.396644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.396677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.396788] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.396820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.397008] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397039] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.397156] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397187] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.397387] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397427] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.397537] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397569] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.397674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.397831] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.397864] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.397970] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398002] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.398183] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398224] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.398352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398385] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.398562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.398702] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398734] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.398851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.398884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.399070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.399276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399309] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.399429] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.399570] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399603] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.399789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.399931] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.399964] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.400164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.400199] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.400407] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.400444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 
00:26:30.791 [2024-11-20 10:44:11.400552] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.400585] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.400695] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.400728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.400906] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.400940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.791 qpair failed and we were unable to recover it. 00:26:30.791 [2024-11-20 10:44:11.401133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.791 [2024-11-20 10:44:11.401167] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.401321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.401356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.401532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.401565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.401807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.401841] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.401961] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.401994] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.402109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.402142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.402276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.402311] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.402503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.402536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.402722] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.402756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.402846] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.792 [2024-11-20 10:44:11.402875] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.792 [2024-11-20 10:44:11.402882] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.792 [2024-11-20 10:44:11.402875] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.402891] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.792 [2024-11-20 10:44:11.402897] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:30.792 [2024-11-20 10:44:11.402906] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.403011] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403041] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.403144] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403174] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.403304] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403342] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.403471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403502] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.403616] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.403822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403855] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.403966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.403999] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.404133] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.404312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404347] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.404459] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404493] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 [2024-11-20 10:44:11.404403] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.404493] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:26:30.792 [2024-11-20 10:44:11.404602] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:26:30.792 [2024-11-20 10:44:11.404675] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404603] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:26:30.792 [2024-11-20 10:44:11.404706] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.404820] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404851] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.404962] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.404993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.405115] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.405148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.405268] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.405302] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.405423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.405456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.405562] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.405595] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.405767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.405801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.405973] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.406007] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.406180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.406233] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.406432] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.406465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.406670] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.406703] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.406890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.406924] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.407052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.407086] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 
00:26:30.792 [2024-11-20 10:44:11.407210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.407246] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.792 qpair failed and we were unable to recover it. 00:26:30.792 [2024-11-20 10:44:11.407451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.792 [2024-11-20 10:44:11.407485] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.407613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.407647] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.407767] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.407800] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.407925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.407958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.408082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.408116] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.408229] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.408263] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.408455] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.408488] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.408665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.408699] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.408828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.408861] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.409041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.409210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.409356] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.409559] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409593] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.409707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.409851] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.409884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.410062] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.410095] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.410222] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.410257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.410376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.410409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.410533] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.410566] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.410679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.410712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.410976] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411009] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.411139] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411184] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.411309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411343] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.411525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411557] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.411706] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411739] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.411849] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.411882] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.411994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.412154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412188] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.412402] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 
00:26:30.793 [2024-11-20 10:44:11.412605] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412638] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.412743] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412777] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.412907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.793 [2024-11-20 10:44:11.412940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.793 qpair failed and we were unable to recover it. 00:26:30.793 [2024-11-20 10:44:11.413058] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.794 [2024-11-20 10:44:11.413092] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.794 qpair failed and we were unable to recover it. 00:26:30.794 [2024-11-20 10:44:11.413265] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:30.794 [2024-11-20 10:44:11.413300] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:30.794 qpair failed and we were unable to recover it. 
00:26:30.794 [2024-11-20 10:44:11.413420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:30.794 [2024-11-20 10:44:11.413454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:30.794 qpair failed and we were unable to recover it.
00:26:31.079 (identical connect()/qpair-failure messages for tqpair=0x7ff260000b90 repeated through [2024-11-20 10:44:11.431167])
00:26:31.080 [2024-11-20 10:44:11.431395] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.080 [2024-11-20 10:44:11.431452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.080 qpair failed and we were unable to recover it.
00:26:31.080 (identical connect()/qpair-failure messages for tqpair=0x7ff25c000b90 repeated through [2024-11-20 10:44:11.435258])
00:26:31.080 [2024-11-20 10:44:11.435451] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.080 [2024-11-20 10:44:11.435486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.080 qpair failed and we were unable to recover it. 00:26:31.080 [2024-11-20 10:44:11.435613] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.080 [2024-11-20 10:44:11.435646] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.080 qpair failed and we were unable to recover it. 00:26:31.080 [2024-11-20 10:44:11.435828] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.080 [2024-11-20 10:44:11.435863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.436001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.436035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.436216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.436252] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.436447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.436486] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.436674] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.436709] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.436853] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.436888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.437071] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.437106] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.437238] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.437277] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.437462] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.437497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.437679] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.437712] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.437836] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.437870] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.437994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.438027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.438136] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.438172] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.438323] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.438386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.438608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.438666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.438796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.438832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.439024] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.439059] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.439186] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.439235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.439353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.439387] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.439618] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.439652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.439824] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.439856] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.439984] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440018] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.440237] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.440403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440436] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.440545] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.440707] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.440857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.440889] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.440999] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.441216] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.441358] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441390] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.441504] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.441658] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.441802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441835] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.441951] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.441984] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.442109] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.442142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.442276] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.442310] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.442413] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.442446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 00:26:31.081 [2024-11-20 10:44:11.442554] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.081 [2024-11-20 10:44:11.442588] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.081 qpair failed and we were unable to recover it. 
00:26:31.081 [2024-11-20 10:44:11.442708] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.442741] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.442861] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.442895] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443048] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443187] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443409] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443444] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.443557] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443591] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443711] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443748] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443860] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.443891] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.443995] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444029] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.444138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444171] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.444313] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444361] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.444484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444522] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.444629] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444662] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.444790] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.444933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.444967] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.445074] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.445221] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445257] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.445364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445398] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.445531] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445564] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.445673] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445705] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.445829] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.445863] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.445987] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446021] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.446135] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446168] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.446288] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446322] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.446431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446464] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.446587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.446810] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.446843] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.447020] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447054] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.447182] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447225] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.447406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447439] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.447621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.447768] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447801] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.447915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.447948] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.448051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.448091] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 00:26:31.082 [2024-11-20 10:44:11.448271] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.082 [2024-11-20 10:44:11.448307] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.082 qpair failed and we were unable to recover it. 
00:26:31.082 [2024-11-20 10:44:11.448482] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.082 [2024-11-20 10:44:11.448515] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.082 qpair failed and we were unable to recover it.
00:26:31.082 [... the three-line failure above (connect() failed, errno = 111 → sock connection error → "qpair failed and we were unable to recover it.") repeats continuously from 10:44:11.448 through 10:44:11.469, alternating across tqpair=0xf82ba0, 0x7ff268000b90, 0x7ff25c000b90, and 0x7ff260000b90, all against addr=10.0.0.2, port=4420; repeated entries condensed ...]
00:26:31.086 [2024-11-20 10:44:11.470052] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470084] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.470223] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470256] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.470373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470405] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.470520] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470552] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.470746] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.470899] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.470931] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.471046] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.471085] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.471215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.471250] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.471373] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.471406] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.471591] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.471624] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.471809] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.471842] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.472047] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.472080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.472192] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.472232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.472433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.472466] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.472641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.472675] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.472814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.472847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.473041] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473074] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.473215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473249] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.473364] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473397] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.473523] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473556] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.473728] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473761] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.473889] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.473922] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.474094] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.474128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.474247] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.474282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.474454] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.474487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.474617] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.474650] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.474930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.474963] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.475158] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.475191] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.475375] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.475409] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.475587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.475619] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.475752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.475785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.475968] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.476005] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.476138] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.476176] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.476309] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.476345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 00:26:31.086 [2024-11-20 10:44:11.476532] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.086 [2024-11-20 10:44:11.476565] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.086 qpair failed and we were unable to recover it. 
00:26:31.086 [2024-11-20 10:44:11.476739] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.476770] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.476942] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.476976] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.477082] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.477113] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.477302] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.477338] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.477481] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.477514] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.477698] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.477731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.477847] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.477879] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.478002] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.478168] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478200] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.478319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478356] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.478596] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478629] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.478750] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478782] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.478903] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.478935] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.479051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479082] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.479193] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.479347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.479501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479533] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.479634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479667] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.479856] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.479890] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.480064] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.480098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.480211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.480245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.480424] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.480456] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.480636] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.480668] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.480959] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.480993] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.481244] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.481278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.481491] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.481524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.481665] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.481697] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.481870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.481901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.482017] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482049] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.482166] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482198] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.482319] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482351] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.482546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482579] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.482709] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482740] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.482855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.482887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.483029] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.483061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 
00:26:31.087 [2024-11-20 10:44:11.483234] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.483267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.483406] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.483446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.087 [2024-11-20 10:44:11.483626] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.087 [2024-11-20 10:44:11.483659] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.087 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.483773] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.483805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.484075] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.484107] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.484293] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.484328] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.484514] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.484547] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.484662] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.484694] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.484897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.484932] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.485152] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.485186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.485397] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.485430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.485621] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.485652] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.485911] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.485946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.486188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.486235] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.486355] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.486395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.486577] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.486609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.486789] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.486823] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.487067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.487100] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.487277] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.487313] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.487502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.487534] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.487720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.487752] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.487868] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.487901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.488140] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.488173] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.488463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.488510] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.488631] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.488665] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.488900] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.488934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.489174] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.489220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.489347] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.489380] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.489587] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.489621] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.489814] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.489847] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.489978] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.490011] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.490198] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.490245] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.490502] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.490536] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.490779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.490812] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.491027] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.491061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 
00:26:31.088 [2024-11-20 10:44:11.491180] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.088 [2024-11-20 10:44:11.491226] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.088 qpair failed and we were unable to recover it. 00:26:31.088 [2024-11-20 10:44:11.491401] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.491433] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.491563] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.491596] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.491723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.491756] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.491963] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.491996] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 [2024-11-20 10:44:11.492173] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.492217] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.492336] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.492370] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.492489] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.492523] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.492697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.492731] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.492919] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.492953] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 [2024-11-20 10:44:11.493068] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.493101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.493376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.493410] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.493608] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.493641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.493781] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.493814] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.493925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.493958] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 [2024-11-20 10:44:11.494153] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.494189] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.494334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.494367] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.494539] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.494572] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.494697] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.494730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.494841] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.494873] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 [2024-11-20 10:44:11.495015] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.495062] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.495257] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.495294] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.495408] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.495441] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.495551] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.495582] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.495752] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.495785] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:31.089 [2024-11-20 10:44:11.495966] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.496000] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.496209] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.496243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@868 -- # return 0 00:26:31.089 [2024-11-20 10:44:11.496437] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.496470] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.496598] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.496630] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:31.089 [2024-11-20 10:44:11.496816] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.496848] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:31.089 [2024-11-20 10:44:11.497028] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497061] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.497189] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.497376] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497408] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 
00:26:31.089 [2024-11-20 10:44:11.497546] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497578] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.497765] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497799] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.497913] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.497951] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.089 qpair failed and we were unable to recover it. 00:26:31.089 [2024-11-20 10:44:11.498070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.089 [2024-11-20 10:44:11.498099] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.498215] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.498247] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.498360] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.498389] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.498566] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.498599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.498699] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.498730] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.498907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.498938] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.499177] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.499221] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.499426] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.499459] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.499578] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.499609] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.499804] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.499834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.499946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.499982] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.500175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.500220] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.500480] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.500512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.500642] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.500674] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.500870] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.500901] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.501012] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501043] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.501241] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501274] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.501383] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501415] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.501535] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501567] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.501694] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501724] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.501904] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.501934] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.502175] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.502213] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.502354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.502391] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.502572] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.502604] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.502747] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.502780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.502972] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503004] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.503195] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503239] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.090 [2024-11-20 10:44:11.503363] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503395] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.503513] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503544] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.503677] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503708] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.503916] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.503949] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 00:26:31.090 [2024-11-20 10:44:11.504131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.090 [2024-11-20 10:44:11.504162] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.090 qpair failed and we were unable to recover it. 
00:26:31.091 [2024-11-20 10:44:11.509479] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.091 [2024-11-20 10:44:11.509524] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.091 qpair failed and we were unable to recover it.
00:26:31.091 [2024-11-20 10:44:11.509652] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.091 [2024-11-20 10:44:11.509685] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.091 qpair failed and we were unable to recover it.
00:26:31.091 [2024-11-20 10:44:11.509933] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.091 [2024-11-20 10:44:11.509965] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.091 qpair failed and we were unable to recover it.
00:26:31.091 [2024-11-20 10:44:11.510083] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.091 [2024-11-20 10:44:11.510115] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.091 qpair failed and we were unable to recover it.
00:26:31.091 [2024-11-20 10:44:11.510239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.091 [2024-11-20 10:44:11.510272] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420
00:26:31.091 qpair failed and we were unable to recover it.
00:26:31.093 [2024-11-20 10:44:11.522191] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.093 [2024-11-20 10:44:11.522238] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.093 qpair failed and we were unable to recover it.
00:26:31.093 [2024-11-20 10:44:11.522353] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.093 [2024-11-20 10:44:11.522383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.093 qpair failed and we were unable to recover it.
00:26:31.093 [2024-11-20 10:44:11.522501] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.522532] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.522646] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.522681] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.522869] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.522899] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.523001] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523032] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.523261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523296] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 
00:26:31.093 [2024-11-20 10:44:11.523420] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523452] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.523569] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523599] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.523741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.523890] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.523927] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.524049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 
00:26:31.093 [2024-11-20 10:44:11.524239] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524278] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.524423] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524454] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.524602] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524634] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.524821] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.524967] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.524998] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 
00:26:31.093 [2024-11-20 10:44:11.525128] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.525160] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.525284] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.093 [2024-11-20 10:44:11.525317] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.093 qpair failed and we were unable to recover it. 00:26:31.093 [2024-11-20 10:44:11.525495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.525531] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.525660] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.525692] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.525802] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.525834] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.525946] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.525977] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.526163] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.526194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.526350] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.526383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.526503] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.526535] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.526638] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.526669] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf82ba0 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.526797] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.526832] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527018] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527053] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527171] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527215] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527352] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527383] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527509] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.527657] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527799] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527831] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.527939] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.527971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.528106] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528139] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.528267] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528298] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.528403] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528435] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.528550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.528703] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528733] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.528846] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.528885] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.529014] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529045] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.529146] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529177] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.529301] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529334] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.529452] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529487] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.529592] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529623] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.529754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.529892] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.529923] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.530031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.530064] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.530245] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.530279] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.530561] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.530594] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.530779] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.530811] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.094 [2024-11-20 10:44:11.530930] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.530961] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.531080] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.531111] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.531231] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.531265] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.531381] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.531413] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 00:26:31.094 [2024-11-20 10:44:11.531581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.094 [2024-11-20 10:44:11.531613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.094 qpair failed and we were unable to recover it. 
00:26:31.095 [2024-11-20 10:44:11.531723] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.531755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.531866] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.531898] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.532010] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.532042] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.532154] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.532186] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.532321] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.532353] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 
00:26:31.095 [2024-11-20 10:44:11.532530] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.532562] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.532696] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.532728] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.532834] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.532866] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:26:31.095 [2024-11-20 10:44:11.533043] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.533077] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.533261] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.533295] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:26:31.095 [2024-11-20 10:44:11.533484] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.533518] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.533634] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.533666] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.533772] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.095 [2024-11-20 10:44:11.533805] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.533915] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.533946] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:31.095 [2024-11-20 10:44:11.534118] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.534151] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.534275] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.534308] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.534414] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.534446] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.534573] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.534605] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.534718] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.095 [2024-11-20 10:44:11.534751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.095 qpair failed and we were unable to recover it.
00:26:31.095 [2024-11-20 10:44:11.534857] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.534888] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.534994] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.535210] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535244] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.535433] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535465] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.535582] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535614] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 
00:26:31.095 [2024-11-20 10:44:11.535720] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535751] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.535855] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.535887] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.536004] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.536035] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.536137] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.536169] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 00:26:31.095 [2024-11-20 10:44:11.536289] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.536323] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 
00:26:31.095 [2024-11-20 10:44:11.536431] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.095 [2024-11-20 10:44:11.536462] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.095 qpair failed and we were unable to recover it. 
[... identical connect() failed (errno = 111, ECONNREFUSED) / qpair-recovery error repeated for tqpair=0x7ff260000b90 (addr=10.0.0.2, port=4420) from 10:44:11.536 through 10:44:11.555; duplicate lines elided ...]
00:26:31.098 [2024-11-20 10:44:11.555506] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.098 [2024-11-20 10:44:11.555538] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.098 qpair failed and we were unable to recover it. 00:26:31.098 [2024-11-20 10:44:11.555724] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.555755] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.555864] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.555894] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.555996] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.556027] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.556236] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.556269] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.556447] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.556479] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.556655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.556686] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.556925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.556957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.557070] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557101] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.557233] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557266] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.557398] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557430] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.557623] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557656] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.557777] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557808] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.557925] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.557957] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.558067] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.558098] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.558334] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.558368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.558541] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.558574] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.558754] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.558786] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.559051] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559083] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.559211] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559243] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.559354] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559386] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.559495] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559528] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.559645] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559676] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.559852] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.559884] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.560087] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.560127] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.560249] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.560282] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.560463] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.560494] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.560690] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.560722] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.560907] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.560940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.561049] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561080] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.561185] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561227] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.561400] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561432] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.561609] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561641] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.561748] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561780] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.561897] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.561929] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 
00:26:31.099 [2024-11-20 10:44:11.562108] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.562140] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.562264] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.562297] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.562483] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.562516] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.562641] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.099 [2024-11-20 10:44:11.562672] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.099 qpair failed and we were unable to recover it. 00:26:31.099 [2024-11-20 10:44:11.562796] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.562828] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 
00:26:31.100 [2024-11-20 10:44:11.562958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.562989] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.563116] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.563148] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.563344] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.563376] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.563581] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.563613] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.563741] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.563773] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 
00:26:31.100 [2024-11-20 10:44:11.563879] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.563910] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.564096] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.564128] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.564312] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.564345] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.564450] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.564481] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.564655] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.564688] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 
00:26:31.100 [2024-11-20 10:44:11.564822] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.564853] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.565031] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.565063] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.565235] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.565267] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.565388] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.565419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.565600] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.565632] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 
00:26:31.100 [2024-11-20 10:44:11.565871] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.565902] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.566026] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.566058] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.566164] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.566194] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.566308] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.566340] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 00:26:31.100 [2024-11-20 10:44:11.566510] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:26:31.100 [2024-11-20 10:44:11.566541] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420 00:26:31.100 qpair failed and we were unable to recover it. 
00:26:31.100 Malloc0
00:26:31.100 [2024-11-20 10:44:11.566734] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.566766] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.566885] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.566916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.567092] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.567124] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.567298] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.567332] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff260000b90 with addr=10.0.0.2, port=4420
00:26:31.100 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.567471] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.567512] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.567644] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.567677] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o
00:26:31.100 [2024-11-20 10:44:11.567883] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.567916] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.568045] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568076] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.568188] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568232] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:31.100 [2024-11-20 10:44:11.568411] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568442] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.568550] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568581] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.568760] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568791] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.568908] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.568940] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.569112] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.569144] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.100 [2024-11-20 10:44:11.569335] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.100 [2024-11-20 10:44:11.569368] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.100 qpair failed and we were unable to recover it.
00:26:31.101 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.569560 through 10:44:11.573194 ...]
00:26:31.101 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.573395 through 10:44:11.574072 ...]
00:26:31.101 [2024-11-20 10:44:11.574087] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init ***
00:26:31.101 [2024-11-20 10:44:11.574103] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff25c000b90 with addr=10.0.0.2, port=4420
00:26:31.101 qpair failed and we were unable to recover it.
00:26:31.101 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.574219 through 10:44:11.582243 ...]
00:26:31.102 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.582454 through 10:44:11.582795 ...]
00:26:31.102 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:26:31.103 [... identical entries for tqpair=0x7ff25c000b90 continue from 10:44:11.582826 through 10:44:11.583122 ...]
00:26:31.103 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
00:26:31.103 [... identical connect() failed (errno = 111) / sock connection error entry for tqpair=0x7ff25c000b90 at 10:44:11.583318 elided ...]
00:26:31.103 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable
00:26:31.103 [... identical entries at 10:44:11.583530 and 10:44:11.583744 elided ...]
00:26:31.103 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:26:31.103 [... identical entry at 10:44:11.583887 elided ...]
00:26:31.103 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.584021 through 10:44:11.585621 ...]
00:26:31.103 [... identical connect() failed (errno = 111) / sock connection error entries for tqpair=0x7ff25c000b90 (addr=10.0.0.2, port=4420) repeat from 10:44:11.585742 through 10:44:11.586229 ...]
00:26:31.103 [2024-11-20 10:44:11.586369] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:26:31.103 [2024-11-20 10:44:11.586419] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7ff268000b90 with addr=10.0.0.2, port=4420
00:26:31.103 qpair failed and we were unable to recover it.
00:26:31.104 [... identical entries, now for tqpair=0x7ff268000b90 (addr=10.0.0.2, port=4420), repeat from 10:44:11.586534 through 10:44:11.589871 ...]
00:26:31.104 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:26:31.104 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:31.104 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 
00:26:31.104 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:26:31.105 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.105 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 
00:26:31.105 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.105 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:26:31.105 [2024-11-20 10:44:11.602313] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
00:26:31.105 [2024-11-20 10:44:11.604840] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.105 [2024-11-20 10:44:11.604973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.105 [2024-11-20 10:44:11.605019] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.105 [2024-11-20 10:44:11.605043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.105 [2024-11-20 10:44:11.605063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.605115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 
00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:31.106 10:44:11 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 3363905 
00:26:31.106 [2024-11-20 10:44:11.664699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.664748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.664763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.664771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.664777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.664792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.674734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.674792] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.674806] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.674813] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.674820] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.674836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.684787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.684839] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.684854] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.684862] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.684868] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.684887] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.694806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.694859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.694874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.694881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.694888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.694904] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.704843] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.704895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.704910] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.704917] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.704924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.704939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.714855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.714912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.106 [2024-11-20 10:44:11.714927] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.106 [2024-11-20 10:44:11.714934] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.106 [2024-11-20 10:44:11.714941] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.106 [2024-11-20 10:44:11.714956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.106 qpair failed and we were unable to recover it. 
00:26:31.106 [2024-11-20 10:44:11.724897] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.106 [2024-11-20 10:44:11.724951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.724965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.724972] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.724979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.724994] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.734881] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.734965] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.734980] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.734987] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.734993] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.735008] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.744955] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.745022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.745037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.745043] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.745049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.745065] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.754972] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.755026] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.755040] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.755047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.755054] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.755069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.764992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.765049] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.765064] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.765072] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.765079] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.765094] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.775041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.775125] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.775140] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.775150] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.775156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.775172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.107 [2024-11-20 10:44:11.785056] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.107 [2024-11-20 10:44:11.785110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.107 [2024-11-20 10:44:11.785125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.107 [2024-11-20 10:44:11.785132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.107 [2024-11-20 10:44:11.785138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.107 [2024-11-20 10:44:11.785154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.107 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.795065] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.795158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.795172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.795179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.795186] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.795205] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.805112] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.805167] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.805181] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.805188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.805195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.805217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.815141] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.815200] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.815220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.815228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.815234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.815253] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.825166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.825224] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.825238] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.825246] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.825252] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.825267] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.835207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.835262] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.835276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.835283] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.835289] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.835304] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.845152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.845208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.845221] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.845228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.845234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.845248] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.855166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.855270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.855284] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.855291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.855298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.855313] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.865271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.865327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.865342] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.865350] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.865356] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.865372] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.875314] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.875368] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.875382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.875389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.875396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.875411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.885378] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.368 [2024-11-20 10:44:11.885445] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.368 [2024-11-20 10:44:11.885459] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.368 [2024-11-20 10:44:11.885467] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.368 [2024-11-20 10:44:11.885473] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.368 [2024-11-20 10:44:11.885488] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.368 qpair failed and we were unable to recover it. 
00:26:31.368 [2024-11-20 10:44:11.895361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.369 [2024-11-20 10:44:11.895422] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.369 [2024-11-20 10:44:11.895436] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.369 [2024-11-20 10:44:11.895443] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.369 [2024-11-20 10:44:11.895449] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.369 [2024-11-20 10:44:11.895464] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-20 10:44:11.905377] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.369 [2024-11-20 10:44:11.905439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.369 [2024-11-20 10:44:11.905456] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.369 [2024-11-20 10:44:11.905464] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.369 [2024-11-20 10:44:11.905469] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.369 [2024-11-20 10:44:11.905484] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-20 10:44:11.915428] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.369 [2024-11-20 10:44:11.915488] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.369 [2024-11-20 10:44:11.915502] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.369 [2024-11-20 10:44:11.915510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.369 [2024-11-20 10:44:11.915516] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.369 [2024-11-20 10:44:11.915532] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-20 10:44:11.925517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.369 [2024-11-20 10:44:11.925572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.369 [2024-11-20 10:44:11.925587] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.369 [2024-11-20 10:44:11.925594] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.369 [2024-11-20 10:44:11.925600] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.369 [2024-11-20 10:44:11.925616] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.369 qpair failed and we were unable to recover it. 
00:26:31.369 [2024-11-20 10:44:11.935475] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.935533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.935547] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.935554] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.935561] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.935576] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.945500] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.945556] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.945569] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.945576] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.945586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.945601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.955556] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.955614] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.955628] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.955635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.955642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.955657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.965558] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.965612] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.965626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.965633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.965640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.965655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.975631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.975686] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.975700] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.975707] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.975714] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.975729] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.985604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.985654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.985668] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.985675] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.985681] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.985696] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:11.995646] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:11.995706] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:11.995720] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:11.995727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:11.995734] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:11.995749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:12.005592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:12.005649] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:12.005662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:12.005670] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:12.005676] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:12.005691] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:12.015707] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.369 [2024-11-20 10:44:12.015763] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.369 [2024-11-20 10:44:12.015777] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.369 [2024-11-20 10:44:12.015785] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.369 [2024-11-20 10:44:12.015791] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.369 [2024-11-20 10:44:12.015806] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.369 qpair failed and we were unable to recover it.
00:26:31.369 [2024-11-20 10:44:12.025740] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.025795] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.025809] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.025816] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.025823] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.025838] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.035765] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.035844] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.035861] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.035868] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.035874] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.035889] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.045801] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.045858] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.045872] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.045881] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.045887] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.045903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.055810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.055883] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.055897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.055904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.055910] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.055926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.065823] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.065876] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.065891] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.065899] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.065906] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.065921] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.075860] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.075921] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.075935] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.075942] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.075952] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.075966] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.370 [2024-11-20 10:44:12.085889] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.370 [2024-11-20 10:44:12.085943] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.370 [2024-11-20 10:44:12.085957] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.370 [2024-11-20 10:44:12.085963] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.370 [2024-11-20 10:44:12.085970] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.370 [2024-11-20 10:44:12.085985] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.370 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.095918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.095974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.095988] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.095996] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.096002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.096017] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.105934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.105991] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.106006] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.106013] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.106020] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.106035] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.115899] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.115959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.115973] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.115981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.115987] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.116004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.125992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.126047] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.126061] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.126068] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.126075] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.126090] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.136074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.136129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.136143] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.136149] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.136156] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.136171] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.146082] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.146136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.630 [2024-11-20 10:44:12.146149] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.630 [2024-11-20 10:44:12.146156] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.630 [2024-11-20 10:44:12.146163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.630 [2024-11-20 10:44:12.146178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.630 qpair failed and we were unable to recover it.
00:26:31.630 [2024-11-20 10:44:12.156090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.630 [2024-11-20 10:44:12.156143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.156157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.156164] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.156170] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.156185] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.166116] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.166176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.166196] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.166208] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.166215] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.166230] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.176131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.176185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.176200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.176212] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.176218] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.176234] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.186169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.186228] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.186241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.186248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.186255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.186270] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.196209] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.196264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.196277] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.196284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.196290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.196306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.206231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.206288] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.206302] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.206313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.206319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.206334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.216242] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.216297] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.216311] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.216319] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.216326] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.216341] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.226298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.226352] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.226366] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.226373] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.226380] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.226396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.236345] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.236405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.236418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.236425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.236432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.236447] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.246360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.246416] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.246430] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.246437] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.246444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.246463] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.256376] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.256430] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.256443] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.256450] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.256456] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.256471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.266403] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.266456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.266471] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.266478] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.266485] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.266501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.631 [2024-11-20 10:44:12.276441] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:31.631 [2024-11-20 10:44:12.276514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:31.631 [2024-11-20 10:44:12.276529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:31.631 [2024-11-20 10:44:12.276536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:31.631 [2024-11-20 10:44:12.276542] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:31.631 [2024-11-20 10:44:12.276558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:31.631 qpair failed and we were unable to recover it.
00:26:31.632 [2024-11-20 10:44:12.286499] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.286555] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.286568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.286575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.286582] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.286598] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.296492] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.296553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.296568] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.296575] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.296583] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.296600] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.306488] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.306542] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.306556] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.306563] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.306569] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.306584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.316551] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.316604] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.316619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.316626] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.316632] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.316647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.326601] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.326657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.326670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.326677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.326683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.326699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.336637] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.336689] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.336703] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.336715] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.336722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.336737] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.346625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.346680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.346694] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.632 [2024-11-20 10:44:12.346702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.632 [2024-11-20 10:44:12.346708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.632 [2024-11-20 10:44:12.346724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.632 qpair failed and we were unable to recover it. 
00:26:31.632 [2024-11-20 10:44:12.356652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.632 [2024-11-20 10:44:12.356708] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.632 [2024-11-20 10:44:12.356724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.891 [2024-11-20 10:44:12.356733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.356740] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.356756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.366676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.366731] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.366745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.366753] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.366759] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.366774] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.376696] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.376749] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.376763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.376769] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.376776] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.376794] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.386730] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.386786] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.386800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.386807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.386814] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.386830] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.396777] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.396835] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.396849] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.396857] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.396864] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.396879] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.406786] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.406863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.406877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.406884] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.406891] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.406906] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.416941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.417007] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.417021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.417029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.417036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.417051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.426898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.426954] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.426968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.426977] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.426983] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.426998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.436947] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.437008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.437022] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.437029] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.437036] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.437051] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.446954] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.447010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.447024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.447031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.447039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.447053] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.456953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.457010] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.457024] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.457031] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.457038] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.457052] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.466964] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.467018] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.467037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.892 [2024-11-20 10:44:12.467045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.892 [2024-11-20 10:44:12.467051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.892 [2024-11-20 10:44:12.467066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.892 qpair failed and we were unable to recover it. 
00:26:31.892 [2024-11-20 10:44:12.477001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.892 [2024-11-20 10:44:12.477065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.892 [2024-11-20 10:44:12.477111] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.477119] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.477125] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.477149] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.487026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.487081] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.487095] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.487102] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.487108] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.487124] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.497041] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.497119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.497134] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.497141] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.497148] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.497163] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.507069] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.507119] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.507133] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.507140] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.507150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.507165] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.517108] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.517162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.517177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.517185] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.517191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.517211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.527146] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.527234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.527248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.527256] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.527262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.527278] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.537160] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.537218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.537232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.537239] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.537246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.537261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.547186] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.547241] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.547256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.547264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.547271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.547286] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.557233] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.557290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.557304] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.557311] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.557318] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.557333] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.567257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.567313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.567328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.567335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.567342] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.567358] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.577271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.577326] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.577340] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.577348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.577354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.577370] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.893 [2024-11-20 10:44:12.587312] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.893 [2024-11-20 10:44:12.587372] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.893 [2024-11-20 10:44:12.587386] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.893 [2024-11-20 10:44:12.587393] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.893 [2024-11-20 10:44:12.587400] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.893 [2024-11-20 10:44:12.587416] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.893 qpair failed and we were unable to recover it. 
00:26:31.894 [2024-11-20 10:44:12.597352] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.894 [2024-11-20 10:44:12.597408] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.894 [2024-11-20 10:44:12.597424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.894 [2024-11-20 10:44:12.597432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.894 [2024-11-20 10:44:12.597438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.894 [2024-11-20 10:44:12.597453] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.894 qpair failed and we were unable to recover it. 
00:26:31.894 [2024-11-20 10:44:12.607388] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.894 [2024-11-20 10:44:12.607439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.894 [2024-11-20 10:44:12.607453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.894 [2024-11-20 10:44:12.607460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.894 [2024-11-20 10:44:12.607467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.894 [2024-11-20 10:44:12.607482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.894 qpair failed and we were unable to recover it. 
00:26:31.894 [2024-11-20 10:44:12.617455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:31.894 [2024-11-20 10:44:12.617514] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:31.894 [2024-11-20 10:44:12.617529] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:31.894 [2024-11-20 10:44:12.617536] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:31.894 [2024-11-20 10:44:12.617543] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:31.894 [2024-11-20 10:44:12.617558] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:31.894 qpair failed and we were unable to recover it. 
00:26:32.152 [2024-11-20 10:44:12.627435] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.152 [2024-11-20 10:44:12.627494] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.152 [2024-11-20 10:44:12.627508] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.152 [2024-11-20 10:44:12.627515] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.152 [2024-11-20 10:44:12.627521] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.152 [2024-11-20 10:44:12.627537] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.152 qpair failed and we were unable to recover it. 
00:26:32.152 [2024-11-20 10:44:12.637497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.152 [2024-11-20 10:44:12.637557] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.152 [2024-11-20 10:44:12.637572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.152 [2024-11-20 10:44:12.637580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.152 [2024-11-20 10:44:12.637590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.152 [2024-11-20 10:44:12.637606] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.152 qpair failed and we were unable to recover it. 
00:26:32.152 [2024-11-20 10:44:12.647531] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.152 [2024-11-20 10:44:12.647585] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.152 [2024-11-20 10:44:12.647599] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.152 [2024-11-20 10:44:12.647608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.152 [2024-11-20 10:44:12.647615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.152 [2024-11-20 10:44:12.647632] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.152 qpair failed and we were unable to recover it. 
00:26:32.152 [2024-11-20 10:44:12.657527] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.152 [2024-11-20 10:44:12.657580] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.657595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.657602] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.657608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.657624] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.667539] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.667593] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.667608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.667615] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.667622] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.667639] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.677516] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.677577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.677591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.677599] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.677606] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.677621] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.687618] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.687703] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.687719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.687727] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.687733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.687748] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.697596] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.697658] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.697674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.697682] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.697688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.697703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.707642] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.707701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.707717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.707726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.707733] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.707749] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.717627] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.717683] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.717698] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.717705] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.717711] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.717726] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.727641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.727700] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.727717] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.727725] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.727731] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.727745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.737734] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.737788] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.737802] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.737809] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.737816] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.737831] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.747795] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.747851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.747865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.747872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.747878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.747892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.757804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.757867] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.757881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.757888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.757894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.757910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.767805] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.767893] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.767908] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.767918] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.767924] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.767939] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.153 [2024-11-20 10:44:12.777845] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.153 [2024-11-20 10:44:12.777901] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.153 [2024-11-20 10:44:12.777917] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.153 [2024-11-20 10:44:12.777924] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.153 [2024-11-20 10:44:12.777930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.153 [2024-11-20 10:44:12.777946] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.153 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.787806] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.787863] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.787877] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.787885] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.787892] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.787907] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.797858] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.797956] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.797971] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.797978] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.797984] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.797999] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.807940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.807996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.808011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.808018] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.808024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.808044] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.817975] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.818027] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.818041] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.818048] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.818055] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.818070] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.828016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.828069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.828083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.828091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.828097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.828112] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.838014] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.838069] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.838083] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.838090] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.838097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.838113] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.848059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.848122] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.848137] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.848144] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.848150] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.848166] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.858053] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.154 [2024-11-20 10:44:12.858113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.154 [2024-11-20 10:44:12.858127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.154 [2024-11-20 10:44:12.858135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.154 [2024-11-20 10:44:12.858142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.154 [2024-11-20 10:44:12.858157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.154 qpair failed and we were unable to recover it. 
00:26:32.154 [2024-11-20 10:44:12.868092] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.154 [2024-11-20 10:44:12.868147] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.154 [2024-11-20 10:44:12.868163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.154 [2024-11-20 10:44:12.868170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.154 [2024-11-20 10:44:12.868177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.154 [2024-11-20 10:44:12.868193] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.154 qpair failed and we were unable to recover it.
00:26:32.154 [2024-11-20 10:44:12.878138] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.154 [2024-11-20 10:44:12.878196] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.154 [2024-11-20 10:44:12.878215] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.154 [2024-11-20 10:44:12.878223] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.154 [2024-11-20 10:44:12.878229] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.154 [2024-11-20 10:44:12.878244] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.154 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.888196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.888259] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.888273] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.888280] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.888286] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.888301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.898179] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.898242] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.898256] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.898267] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.898273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.898289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.908178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.908239] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.908255] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.908263] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.908270] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.908285] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.918268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.918344] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.918359] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.918366] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.918373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.918387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.928254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.928308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.928324] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.928332] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.928339] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.928355] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.938280] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.938338] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.938352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.938361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.938368] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.938387] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.948300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.948354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.948368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.948375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.948382] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.948397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.958359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.958418] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.958431] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.958438] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.958444] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.958459] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.412 [2024-11-20 10:44:12.968289] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.412 [2024-11-20 10:44:12.968351] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.412 [2024-11-20 10:44:12.968367] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.412 [2024-11-20 10:44:12.968374] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.412 [2024-11-20 10:44:12.968381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.412 [2024-11-20 10:44:12.968396] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.412 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:12.978333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:12.978385] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:12.978399] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:12.978406] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:12.978412] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:12.978428] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:12.988381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:12.988458] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:12.988473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:12.988480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:12.988486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:12.988501] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:12.998385] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:12.998439] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:12.998453] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:12.998460] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:12.998466] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:12.998482] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.008532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.008591] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.008605] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.008614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.008620] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.008635] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.018525] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.018576] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.018590] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.018598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.018604] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.018619] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.028453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.028530] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.028550] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.028558] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.028565] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.028580] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.038567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.038624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.038639] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.038646] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.038653] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.038669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.048583] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.048639] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.048653] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.048660] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.048666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.048682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.058576] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.058643] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.058657] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.058665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.058671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.058686] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.068599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.068652] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.068669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.068677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.068687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.068703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.078592] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.078646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.078660] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.078667] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.078673] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.078689] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.088768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.088849] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.088863] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.088871] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.088877] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.088892] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.098705] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.413 [2024-11-20 10:44:13.098758] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.413 [2024-11-20 10:44:13.098771] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.413 [2024-11-20 10:44:13.098778] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.413 [2024-11-20 10:44:13.098785] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.413 [2024-11-20 10:44:13.098800] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.413 qpair failed and we were unable to recover it.
00:26:32.413 [2024-11-20 10:44:13.108737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.414 [2024-11-20 10:44:13.108811] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.414 [2024-11-20 10:44:13.108826] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.414 [2024-11-20 10:44:13.108834] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.414 [2024-11-20 10:44:13.108840] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.414 [2024-11-20 10:44:13.108856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.414 qpair failed and we were unable to recover it.
00:26:32.414 [2024-11-20 10:44:13.118811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.414 [2024-11-20 10:44:13.118866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.414 [2024-11-20 10:44:13.118880] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.414 [2024-11-20 10:44:13.118887] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.414 [2024-11-20 10:44:13.118894] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.414 [2024-11-20 10:44:13.118909] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.414 qpair failed and we were unable to recover it.
00:26:32.414 [2024-11-20 10:44:13.128794] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.414 [2024-11-20 10:44:13.128851] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.414 [2024-11-20 10:44:13.128865] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.414 [2024-11-20 10:44:13.128873] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.414 [2024-11-20 10:44:13.128879] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.414 [2024-11-20 10:44:13.128895] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.414 qpair failed and we were unable to recover it.
00:26:32.414 [2024-11-20 10:44:13.138874] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.138928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.138943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.138950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.138957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.138972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.148810] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.148872] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.148886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.148893] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.148900] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.148914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.158905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.158973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.158991] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.158998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.159005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.159020] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.168912] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.168974] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.168990] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.168998] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.169005] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.169022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.178970] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.179022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.179037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.179044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.179051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.179066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.188957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.189024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.189038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.189046] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.189052] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.189067] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.199038] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.199094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.199107] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.199114] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.199124] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.199140] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.209029] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.672 [2024-11-20 10:44:13.209096] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.672 [2024-11-20 10:44:13.209109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.672 [2024-11-20 10:44:13.209117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.672 [2024-11-20 10:44:13.209123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.672 [2024-11-20 10:44:13.209138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.672 qpair failed and we were unable to recover it.
00:26:32.672 [2024-11-20 10:44:13.219079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.672 [2024-11-20 10:44:13.219130] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.672 [2024-11-20 10:44:13.219144] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.672 [2024-11-20 10:44:13.219151] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.672 [2024-11-20 10:44:13.219157] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.672 [2024-11-20 10:44:13.219172] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.672 qpair failed and we were unable to recover it. 
00:26:32.672 [2024-11-20 10:44:13.229120] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.229176] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.229190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.229198] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.229207] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.229223] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.239114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.239170] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.239186] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.239193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.239200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.239220] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.249194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.249308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.249323] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.249330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.249337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.249352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.259178] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.259237] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.259251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.259259] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.259265] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.259281] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.269194] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.269250] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.269265] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.269273] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.269279] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.269294] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.279245] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.279301] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.279315] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.279322] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.279329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.279345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.289257] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.289317] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.289332] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.289339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.289345] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.289360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.299304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.299355] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.299371] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.299378] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.299385] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.299400] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.309342] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.309405] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.309419] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.309426] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.309432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.309448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.319344] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.319401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.319415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.319422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.319428] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.319444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.329386] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.329460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.329474] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.329485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.329491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.329506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.339404] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.339454] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.339470] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.339477] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.339484] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.339499] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.349361] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.349460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.673 [2024-11-20 10:44:13.349476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.673 [2024-11-20 10:44:13.349483] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.673 [2024-11-20 10:44:13.349490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.673 [2024-11-20 10:44:13.349506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.673 qpair failed and we were unable to recover it. 
00:26:32.673 [2024-11-20 10:44:13.359462] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.673 [2024-11-20 10:44:13.359515] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.674 [2024-11-20 10:44:13.359532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.674 [2024-11-20 10:44:13.359540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.674 [2024-11-20 10:44:13.359547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.674 [2024-11-20 10:44:13.359562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.674 qpair failed and we were unable to recover it. 
00:26:32.674 [2024-11-20 10:44:13.369497] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.674 [2024-11-20 10:44:13.369553] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.674 [2024-11-20 10:44:13.369570] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.674 [2024-11-20 10:44:13.369579] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.674 [2024-11-20 10:44:13.369586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.674 [2024-11-20 10:44:13.369607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.674 qpair failed and we were unable to recover it. 
00:26:32.674 [2024-11-20 10:44:13.379536] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.674 [2024-11-20 10:44:13.379587] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.674 [2024-11-20 10:44:13.379603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.674 [2024-11-20 10:44:13.379610] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.674 [2024-11-20 10:44:13.379617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.674 [2024-11-20 10:44:13.379633] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.674 qpair failed and we were unable to recover it. 
00:26:32.674 [2024-11-20 10:44:13.389572] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.674 [2024-11-20 10:44:13.389629] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.674 [2024-11-20 10:44:13.389644] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.674 [2024-11-20 10:44:13.389652] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.674 [2024-11-20 10:44:13.389658] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.674 [2024-11-20 10:44:13.389673] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.674 qpair failed and we were unable to recover it. 
00:26:32.932 [2024-11-20 10:44:13.399579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.932 [2024-11-20 10:44:13.399633] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.932 [2024-11-20 10:44:13.399647] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.932 [2024-11-20 10:44:13.399654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.932 [2024-11-20 10:44:13.399660] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.932 [2024-11-20 10:44:13.399675] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.932 qpair failed and we were unable to recover it. 
00:26:32.932 [2024-11-20 10:44:13.409631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.932 [2024-11-20 10:44:13.409697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.932 [2024-11-20 10:44:13.409712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.932 [2024-11-20 10:44:13.409719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.932 [2024-11-20 10:44:13.409725] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.932 [2024-11-20 10:44:13.409741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.932 qpair failed and we were unable to recover it. 
00:26:32.932 [2024-11-20 10:44:13.419634] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.932 [2024-11-20 10:44:13.419691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.932 [2024-11-20 10:44:13.419708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.932 [2024-11-20 10:44:13.419716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.932 [2024-11-20 10:44:13.419723] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.932 [2024-11-20 10:44:13.419739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.932 qpair failed and we were unable to recover it. 
00:26:32.932 [2024-11-20 10:44:13.429659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.932 [2024-11-20 10:44:13.429714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.932 [2024-11-20 10:44:13.429730] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.932 [2024-11-20 10:44:13.429738] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.932 [2024-11-20 10:44:13.429745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.932 [2024-11-20 10:44:13.429760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.932 qpair failed and we were unable to recover it. 
00:26:32.932 [2024-11-20 10:44:13.439701] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.932 [2024-11-20 10:44:13.439760] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.932 [2024-11-20 10:44:13.439774] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.933 [2024-11-20 10:44:13.439781] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.933 [2024-11-20 10:44:13.439788] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.933 [2024-11-20 10:44:13.439802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.933 qpair failed and we were unable to recover it. 
00:26:32.933 [2024-11-20 10:44:13.449728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.933 [2024-11-20 10:44:13.449784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.933 [2024-11-20 10:44:13.449798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.933 [2024-11-20 10:44:13.449805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.933 [2024-11-20 10:44:13.449812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.933 [2024-11-20 10:44:13.449827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.933 qpair failed and we were unable to recover it. 
00:26:32.933 [2024-11-20 10:44:13.459749] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.933 [2024-11-20 10:44:13.459814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.933 [2024-11-20 10:44:13.459828] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.933 [2024-11-20 10:44:13.459838] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.933 [2024-11-20 10:44:13.459844] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.933 [2024-11-20 10:44:13.459860] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.933 qpair failed and we were unable to recover it. 
00:26:32.933 [2024-11-20 10:44:13.469785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.933 [2024-11-20 10:44:13.469859] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.933 [2024-11-20 10:44:13.469874] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.933 [2024-11-20 10:44:13.469882] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.933 [2024-11-20 10:44:13.469888] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.933 [2024-11-20 10:44:13.469903] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.933 qpair failed and we were unable to recover it. 
00:26:32.933 [2024-11-20 10:44:13.479857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:32.933 [2024-11-20 10:44:13.479928] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:32.933 [2024-11-20 10:44:13.479943] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:32.933 [2024-11-20 10:44:13.479950] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:32.933 [2024-11-20 10:44:13.479956] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:32.933 [2024-11-20 10:44:13.479972] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:32.933 qpair failed and we were unable to recover it. 
00:26:32.933 [2024-11-20 10:44:13.489888] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.489952] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.489966] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.489974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.489980] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.489995] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.499915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.499970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.499986] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.499995] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.500002] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.500021] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.509901] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.509953] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.509967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.509974] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.509981] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.509996] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.519866] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.519919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.519933] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.519940] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.519946] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.519961] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.529953] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.530008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.530021] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.530028] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.530034] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.530049] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.539982] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.540031] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.540046] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.540053] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.540059] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.540074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.550006] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.550070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.550084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.550092] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.550098] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.550114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.560072] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.560154] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.560168] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.933 [2024-11-20 10:44:13.560176] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.933 [2024-11-20 10:44:13.560182] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.933 [2024-11-20 10:44:13.560198] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.933 qpair failed and we were unable to recover it.
00:26:32.933 [2024-11-20 10:44:13.570079] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.933 [2024-11-20 10:44:13.570133] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.933 [2024-11-20 10:44:13.570148] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.570155] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.570162] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.570177] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.580155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.580230] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.580245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.580252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.580258] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.580274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.590132] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.590185] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.590205] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.590213] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.590219] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.590235] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.600161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.600236] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.600251] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.600258] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.600264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.600279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.610189] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.610298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.610314] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.610321] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.610329] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.610344] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.620222] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.620276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.620291] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.620299] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.620305] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.620321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.630249] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.630302] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.630316] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.630323] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.630333] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.630349] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.640271] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.640329] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.640343] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.640351] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.640358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.640373] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:32.934 [2024-11-20 10:44:13.650300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:32.934 [2024-11-20 10:44:13.650383] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:32.934 [2024-11-20 10:44:13.650397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:32.934 [2024-11-20 10:44:13.650404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:32.934 [2024-11-20 10:44:13.650410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:32.934 [2024-11-20 10:44:13.650425] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:32.934 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.660333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.660387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.660401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.660408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.660414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.660429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.670415] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.670469] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.670485] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.670495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.670502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.670517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.680422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.680479] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.680494] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.680502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.680509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.680524] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.690395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.690461] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.690476] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.690484] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.690490] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.690505] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.700507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.700563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.700578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.700586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.700592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.700607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.710524] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.710575] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.710588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.710595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.710601] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.710617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.720518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.720574] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.720594] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.720601] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.720608] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.720623] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.730543] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.730601] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.730615] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.730622] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.730628] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.730643] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.740506] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.740558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.740572] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.740580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.740587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.740602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.750604] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.750675] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.750690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.750697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.750703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.750718] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.193 qpair failed and we were unable to recover it.
00:26:33.193 [2024-11-20 10:44:13.760631] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.193 [2024-11-20 10:44:13.760698] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.193 [2024-11-20 10:44:13.760712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.193 [2024-11-20 10:44:13.760720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.193 [2024-11-20 10:44:13.760729] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.193 [2024-11-20 10:44:13.760745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.770663] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.770717] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.770732] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.770739] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.770745] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.770760] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.780697] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.780753] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.780767] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.780775] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.780781] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.780796] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.790641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.790705] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.790719] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.790726] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.790732] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.790747] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.800737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.800790] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.800804] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.800811] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.800817] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.800833] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.810770] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.810822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.810836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.810844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.810850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.810865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.820803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.820881] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.820897] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.820904] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.820912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.820928] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.830817] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.194 [2024-11-20 10:44:13.830869] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.194 [2024-11-20 10:44:13.830883] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.194 [2024-11-20 10:44:13.830890] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.194 [2024-11-20 10:44:13.830897] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.194 [2024-11-20 10:44:13.830912] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.194 qpair failed and we were unable to recover it.
00:26:33.194 [2024-11-20 10:44:13.840854] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.840955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.840969] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.840976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.840982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.840998] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.194 [2024-11-20 10:44:13.850886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.850945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.850960] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.850967] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.850973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.850988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.194 [2024-11-20 10:44:13.860882] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.860938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.860953] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.860960] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.860967] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.860982] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.194 [2024-11-20 10:44:13.870966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.871035] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.871050] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.871057] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.871063] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.871078] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.194 [2024-11-20 10:44:13.880976] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.881079] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.881094] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.881101] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.881107] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.881123] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.194 [2024-11-20 10:44:13.890997] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.194 [2024-11-20 10:44:13.891054] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.194 [2024-11-20 10:44:13.891068] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.194 [2024-11-20 10:44:13.891079] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.194 [2024-11-20 10:44:13.891085] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.194 [2024-11-20 10:44:13.891101] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.194 qpair failed and we were unable to recover it. 
00:26:33.195 [2024-11-20 10:44:13.901019] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.195 [2024-11-20 10:44:13.901073] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.195 [2024-11-20 10:44:13.901087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.195 [2024-11-20 10:44:13.901094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.195 [2024-11-20 10:44:13.901101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.195 [2024-11-20 10:44:13.901116] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.195 qpair failed and we were unable to recover it. 
00:26:33.195 [2024-11-20 10:44:13.911049] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.195 [2024-11-20 10:44:13.911102] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.195 [2024-11-20 10:44:13.911116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.195 [2024-11-20 10:44:13.911123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.195 [2024-11-20 10:44:13.911130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.195 [2024-11-20 10:44:13.911146] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.195 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.921075] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.921151] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.921166] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.921174] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.921180] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.921196] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.931151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.931254] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.931268] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.931275] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.931282] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.931301] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.941170] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.941271] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.941286] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.941293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.941299] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.941315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.951161] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.951218] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.951232] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.951240] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.951246] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.951261] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.961212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.961267] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.961281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.961288] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.961294] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.961309] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.971262] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.971321] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.971336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.971344] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.971350] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.971365] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.981263] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.981322] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.981336] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.981345] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.981351] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.981367] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:13.991279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:13.991478] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:13.991495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:13.991502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:13.991509] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:13.991526] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:14.001320] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:14.001379] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:14.001394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:14.001401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:14.001407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:14.001423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:14.011405] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:14.011476] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:14.011491] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:14.011498] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:14.011504] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:14.011520] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:14.021370] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:14.021429] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:14.021447] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:14.021455] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:14.021461] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:14.021476] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:14.031395] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:14.031448] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:14.031462] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:14.031470] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.454 [2024-11-20 10:44:14.031476] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.454 [2024-11-20 10:44:14.031491] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.454 qpair failed and we were unable to recover it. 
00:26:33.454 [2024-11-20 10:44:14.041489] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.454 [2024-11-20 10:44:14.041545] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.454 [2024-11-20 10:44:14.041559] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.454 [2024-11-20 10:44:14.041566] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.041573] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.041587] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.051465] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.051519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.051532] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.051539] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.051545] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.051561] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.061526] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.061590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.061604] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.061611] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.061617] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.061636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.071541] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.071606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.071622] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.071630] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.071636] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.071653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.081549] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.081606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.081620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.081627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.081634] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.081650] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.091615] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.091680] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.091695] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.091702] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.091708] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.091724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.101598] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.101657] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.101672] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.101679] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.101686] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.101702] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.111671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.111729] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.111743] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.111751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.111758] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.111775] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.121599] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.121659] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.121674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.121681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.121688] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.121704] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.131688] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.131743] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.131757] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.131764] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.131771] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.131786] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.141717] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.141771] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.141785] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.141792] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.141799] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.141814] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.151711] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.151762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.151780] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.151788] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.151794] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.151810] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.161768] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.161829] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.161845] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.161853] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.455 [2024-11-20 10:44:14.161860] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.455 [2024-11-20 10:44:14.161875] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.455 qpair failed and we were unable to recover it. 
00:26:33.455 [2024-11-20 10:44:14.171772] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.455 [2024-11-20 10:44:14.171838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.455 [2024-11-20 10:44:14.171853] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.455 [2024-11-20 10:44:14.171860] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.456 [2024-11-20 10:44:14.171866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.456 [2024-11-20 10:44:14.171881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.456 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.181761] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.181814] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.181827] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.181835] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.181841] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.181856] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.191851] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.191906] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.191919] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.191926] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.191936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.191951] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.201893] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.201950] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.201963] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.201970] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.201977] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.201992] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.211940] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.211995] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.212009] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.212016] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.212022] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.212038] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.221870] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.221919] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.221934] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.221941] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.221948] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.221962] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.231973] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.232030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.715 [2024-11-20 10:44:14.232044] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.715 [2024-11-20 10:44:14.232052] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.715 [2024-11-20 10:44:14.232058] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.715 [2024-11-20 10:44:14.232074] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.715 qpair failed and we were unable to recover it. 
00:26:33.715 [2024-11-20 10:44:14.242015] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.715 [2024-11-20 10:44:14.242094] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.242110] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.242117] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.242123] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.242138] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.252039] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.252098] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.252113] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.252120] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.252126] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.252141] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.262061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.262113] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.262127] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.262134] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.262141] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.262157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.272088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.272137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.272152] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.272160] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.272166] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.272181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.282165] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.282269] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.282287] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.282294] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.282301] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.282316] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.292152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.292212] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.292226] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.292233] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.292239] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.292254] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.302174] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.302231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.302245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.302252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.302259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.302274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.312208] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.312287] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.312303] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.312310] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.312316] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.312331] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.322260] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.322315] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.322329] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.322339] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.322346] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.322362] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.332250] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.332308] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.332322] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.332330] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.332337] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.332352] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.342346] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.342431] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.342448] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.342457] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.342464] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.342480] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.352284] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.352337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.716 [2024-11-20 10:44:14.352351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.716 [2024-11-20 10:44:14.352357] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.716 [2024-11-20 10:44:14.352364] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.716 [2024-11-20 10:44:14.352379] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.716 qpair failed and we were unable to recover it. 
00:26:33.716 [2024-11-20 10:44:14.362330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.716 [2024-11-20 10:44:14.362386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.362400] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.362407] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.362413] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.362429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.372409] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.717 [2024-11-20 10:44:14.372460] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.372475] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.372482] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.372489] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.372506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.382432] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.717 [2024-11-20 10:44:14.382490] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.382504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.382512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.382519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.382535] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.392422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.717 [2024-11-20 10:44:14.392496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.392511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.392518] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.392525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.392540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.402455] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.717 [2024-11-20 10:44:14.402511] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.402524] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.402531] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.402538] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.402553] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.412419] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:33.717 [2024-11-20 10:44:14.412527] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:33.717 [2024-11-20 10:44:14.412541] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:33.717 [2024-11-20 10:44:14.412548] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:33.717 [2024-11-20 10:44:14.412554] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:33.717 [2024-11-20 10:44:14.412569] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:33.717 qpair failed and we were unable to recover it. 
00:26:33.717 [2024-11-20 10:44:14.422575] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.717 [2024-11-20 10:44:14.422636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.717 [2024-11-20 10:44:14.422650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.717 [2024-11-20 10:44:14.422659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.717 [2024-11-20 10:44:14.422668] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.717 [2024-11-20 10:44:14.422683] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.717 qpair failed and we were unable to recover it.
00:26:33.717 [2024-11-20 10:44:14.432586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.717 [2024-11-20 10:44:14.432646] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.717 [2024-11-20 10:44:14.432661] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.717 [2024-11-20 10:44:14.432669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.717 [2024-11-20 10:44:14.432674] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.717 [2024-11-20 10:44:14.432690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.717 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.442582] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.442673] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.442689] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.442697] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.442703] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.442719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.452571] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.452632] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.452646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.452659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.452666] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.452682] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.462590] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.462651] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.462664] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.462671] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.462678] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.462694] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.472679] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.472736] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.472751] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.472758] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.472764] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.472779] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.482632] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.482691] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.482705] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.482712] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.482719] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.482733] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.492699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.492768] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.492783] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.492790] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.492798] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.492818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.502737] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.502794] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.502808] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.502815] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.502822] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.502836] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.512793] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.512895] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.512909] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.512916] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.512923] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.512937] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.522886] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.522945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.522967] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.522976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.522982] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.523001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.532839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.532929] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.532944] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.532952] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.532958] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.532973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.542855] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.542938] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.542952] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.542959] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.542965] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.978 [2024-11-20 10:44:14.542981] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.978 qpair failed and we were unable to recover it.
00:26:33.978 [2024-11-20 10:44:14.552879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.978 [2024-11-20 10:44:14.552935] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.978 [2024-11-20 10:44:14.552950] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.978 [2024-11-20 10:44:14.552957] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.978 [2024-11-20 10:44:14.552964] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.552980] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.562915] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.562968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.562981] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.562989] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.562995] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.563011] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.572998] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.573064] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.573080] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.573087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.573093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.573108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.582969] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.583021] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.583039] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.583047] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.583053] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.583069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.592992] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.593044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.593059] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.593066] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.593072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.593087] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.603036] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.603111] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.603125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.603132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.603138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.603154] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.613061] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.613135] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.613150] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.613157] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.613163] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.613178] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.623090] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.623145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.623160] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.623168] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.623174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.623192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.633122] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.633169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.633184] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.633191] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.633197] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.633217] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.643195] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.643255] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.643269] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.643276] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.643283] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.643298] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.653175] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.653231] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.653245] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.653252] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.653259] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.653274] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.663256] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.663313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.663327] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.663335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.663341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.663356] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.673231] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.673281] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.673296] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.979 [2024-11-20 10:44:14.673303] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.979 [2024-11-20 10:44:14.673310] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.979 [2024-11-20 10:44:14.673325] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.979 qpair failed and we were unable to recover it.
00:26:33.979 [2024-11-20 10:44:14.683269] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.979 [2024-11-20 10:44:14.683327] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.979 [2024-11-20 10:44:14.683341] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.980 [2024-11-20 10:44:14.683348] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.980 [2024-11-20 10:44:14.683354] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.980 [2024-11-20 10:44:14.683369] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.980 qpair failed and we were unable to recover it.
00:26:33.980 [2024-11-20 10:44:14.693368] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.980 [2024-11-20 10:44:14.693456] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.980 [2024-11-20 10:44:14.693473] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.980 [2024-11-20 10:44:14.693480] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.980 [2024-11-20 10:44:14.693486] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.980 [2024-11-20 10:44:14.693502] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.980 qpair failed and we were unable to recover it.
00:26:33.980 [2024-11-20 10:44:14.703347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:33.980 [2024-11-20 10:44:14.703404] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:33.980 [2024-11-20 10:44:14.703418] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:33.980 [2024-11-20 10:44:14.703425] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:33.980 [2024-11-20 10:44:14.703432] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:33.980 [2024-11-20 10:44:14.703448] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:33.980 qpair failed and we were unable to recover it.
00:26:34.239 [2024-11-20 10:44:14.713381] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.239 [2024-11-20 10:44:14.713434] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.239 [2024-11-20 10:44:14.713454] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.239 [2024-11-20 10:44:14.713461] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.239 [2024-11-20 10:44:14.713467] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.239 [2024-11-20 10:44:14.713483] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.239 qpair failed and we were unable to recover it.
00:26:34.239 [2024-11-20 10:44:14.723436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.239 [2024-11-20 10:44:14.723496] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.240 [2024-11-20 10:44:14.723511] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.240 [2024-11-20 10:44:14.723519] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.240 [2024-11-20 10:44:14.723525] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.240 [2024-11-20 10:44:14.723540] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.240 qpair failed and we were unable to recover it.
00:26:34.240 [2024-11-20 10:44:14.733436] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.240 [2024-11-20 10:44:14.733502] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.240 [2024-11-20 10:44:14.733516] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.240 [2024-11-20 10:44:14.733524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.240 [2024-11-20 10:44:14.733530] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.240 [2024-11-20 10:44:14.733546] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.240 qpair failed and we were unable to recover it.
00:26:34.240 [2024-11-20 10:44:14.743463] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.240 [2024-11-20 10:44:14.743519] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.240 [2024-11-20 10:44:14.743533] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.240 [2024-11-20 10:44:14.743540] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.240 [2024-11-20 10:44:14.743547] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.240 [2024-11-20 10:44:14.743562] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.240 qpair failed and we were unable to recover it.
00:26:34.240 [2024-11-20 10:44:14.753509] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.240 [2024-11-20 10:44:14.753559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.240 [2024-11-20 10:44:14.753574] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.240 [2024-11-20 10:44:14.753581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.240 [2024-11-20 10:44:14.753591] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.240 [2024-11-20 10:44:14.753607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.240 qpair failed and we were unable to recover it.
00:26:34.240 [2024-11-20 10:44:14.763517] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.240 [2024-11-20 10:44:14.763606] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.240 [2024-11-20 10:44:14.763620] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.240 [2024-11-20 10:44:14.763627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.240 [2024-11-20 10:44:14.763633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.240 [2024-11-20 10:44:14.763649] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.240 qpair failed and we were unable to recover it.
00:26:34.240 [2024-11-20 10:44:14.773550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.773609] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.773624] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.773632] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.773638] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.773653] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.783566] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.783644] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.783658] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.783665] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.783671] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.783687] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.793569] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.793624] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.793638] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.793645] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.793652] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.793669] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.803655] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.803714] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.803727] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.803734] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.803741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.803756] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.813661] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.813712] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.813726] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.813732] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.813739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.813755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.823676] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.823734] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.823748] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.823756] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.823762] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.823777] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.833695] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.833750] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.833765] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.833772] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.833779] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.833795] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.240 [2024-11-20 10:44:14.843791] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.240 [2024-11-20 10:44:14.843847] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.240 [2024-11-20 10:44:14.843864] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.240 [2024-11-20 10:44:14.843872] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.240 [2024-11-20 10:44:14.843878] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.240 [2024-11-20 10:44:14.843893] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.240 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.853762] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.853846] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.853860] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.853867] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.853873] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.853888] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.863787] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.863871] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.863885] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.863892] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.863899] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.863914] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.873811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.873866] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.873881] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.873888] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.873895] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.873910] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.883857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.883913] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.883928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.883938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.883944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.883959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.893867] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.893924] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.893937] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.893945] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.893951] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.893967] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.903905] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.903980] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.903994] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.904002] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.904008] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.904023] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.913987] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.914072] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.914087] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.914094] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.914101] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.914115] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.923966] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.924023] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.924037] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.924044] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.924051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.924066] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.934026] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.934083] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.934097] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.934105] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.934112] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.934127] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.944031] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.944115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.944130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.944137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.944143] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.944158] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.954091] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.954158] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.954172] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.954179] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.954185] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.241 [2024-11-20 10:44:14.954204] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.241 qpair failed and we were unable to recover it. 
00:26:34.241 [2024-11-20 10:44:14.964074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.241 [2024-11-20 10:44:14.964169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.241 [2024-11-20 10:44:14.964185] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.241 [2024-11-20 10:44:14.964193] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.241 [2024-11-20 10:44:14.964200] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.242 [2024-11-20 10:44:14.964229] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.242 qpair failed and we were unable to recover it. 
00:26:34.501 [2024-11-20 10:44:14.974103] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.501 [2024-11-20 10:44:14.974163] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.501 [2024-11-20 10:44:14.974177] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.501 [2024-11-20 10:44:14.974184] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:14.974191] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:14.974211] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:14.984136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:14.984189] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:14.984208] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:14.984215] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:14.984221] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:14.984237] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:14.994182] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:14.994240] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:14.994257] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:14.994264] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:14.994271] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:14.994287] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.004131] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.004229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.004242] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.004249] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.004255] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.004271] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.014293] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.014350] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.014364] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.014375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.014381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.014397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.024177] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.024234] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.024249] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.024257] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.024264] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.024279] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.034307] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.034361] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.034375] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.034382] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.034389] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.034404] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.044301] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.044363] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.044377] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.044384] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.044391] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.044407] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.054328] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.054384] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.054398] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.054404] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.054411] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.054429] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.064286] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.064346] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.064360] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.064368] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.064375] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.064390] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.074390] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.074453] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.074468] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.074475] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.074482] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.074497] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.084454] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.084513] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.084527] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.084534] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.084541] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.084556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.094453] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.094504] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.094517] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.094524] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.094531] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.094547] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.104507] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.104561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.104575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.104584] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.104590] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.104605] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.114508] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.114560] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.114573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.114580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.114586] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.114601] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.124586] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.124654] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.124669] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.124677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.124683] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.124698] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.134585] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.134637] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.134650] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.134658] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.134664] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.134678] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.144521] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.144578] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.144595] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.144603] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.144609] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.144625] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.154647] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.154710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.154724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.154731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.154738] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.154753] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.164659] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.502 [2024-11-20 10:44:15.164715] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.502 [2024-11-20 10:44:15.164729] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.502 [2024-11-20 10:44:15.164735] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.502 [2024-11-20 10:44:15.164742] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.502 [2024-11-20 10:44:15.164758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.502 qpair failed and we were unable to recover it. 
00:26:34.502 [2024-11-20 10:44:15.174683] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.174748] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.174763] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.174771] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.174777] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.174792] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.503 [2024-11-20 10:44:15.184736] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.184802] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.184816] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.184824] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.184833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.184849] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.503 [2024-11-20 10:44:15.194735] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.194825] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.194840] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.194847] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.194853] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.194868] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.503 [2024-11-20 10:44:15.204813] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.204917] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.204931] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.204938] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.204944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.204960] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.503 [2024-11-20 10:44:15.214798] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.214890] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.214905] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.214911] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.214918] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.214933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.503 [2024-11-20 10:44:15.224865] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.503 [2024-11-20 10:44:15.224964] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.503 [2024-11-20 10:44:15.224978] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.503 [2024-11-20 10:44:15.224985] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.503 [2024-11-20 10:44:15.224992] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.503 [2024-11-20 10:44:15.225007] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.503 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.234848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.234907] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.234921] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.234929] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.234936] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.234952] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.244819] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.244873] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.244889] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.244896] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.244903] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.244919] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.254914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.254970] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.254984] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.254991] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.254998] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.255013] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.264948] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.265013] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.265028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.265036] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.265042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.265057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.274957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.275008] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.275026] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.275033] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.275039] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.275054] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.284999] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.285065] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.762 [2024-11-20 10:44:15.285079] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.762 [2024-11-20 10:44:15.285087] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.762 [2024-11-20 10:44:15.285093] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.762 [2024-11-20 10:44:15.285108] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.762 qpair failed and we were unable to recover it. 
00:26:34.762 [2024-11-20 10:44:15.295017] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.762 [2024-11-20 10:44:15.295110] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.295125] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.295132] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.295138] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.295153] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.305046] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.763 [2024-11-20 10:44:15.305099] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.305116] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.305123] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.305130] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.305145] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.315086] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.763 [2024-11-20 10:44:15.315137] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.315151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.315158] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.315174] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.315189] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.325115] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.763 [2024-11-20 10:44:15.325175] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.325190] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.325197] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.325208] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.325224] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.335148] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.763 [2024-11-20 10:44:15.335219] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.335234] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.335241] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.335247] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.335262] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.345169] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:34.763 [2024-11-20 10:44:15.345226] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:34.763 [2024-11-20 10:44:15.345241] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:34.763 [2024-11-20 10:44:15.345248] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:34.763 [2024-11-20 10:44:15.345254] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:34.763 [2024-11-20 10:44:15.345269] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:34.763 qpair failed and we were unable to recover it. 
00:26:34.763 [2024-11-20 10:44:15.355199] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.355263] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.355276] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.355284] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.355290] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.355306] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.365261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.365332] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.365347] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.365355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.365361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.365376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.375287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.375337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.375353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.375360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.375366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.375382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.385279] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.385333] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.385348] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.385355] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.385361] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.385376] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.395316] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.395369] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.395382] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.395389] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.395396] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.395411] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.405360] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.405420] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.405439] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.405447] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.405454] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.405470] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.415421] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.415524] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.415538] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.415544] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.415551] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.415567] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.425387] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.425449] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.425464] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.425471] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.425478] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.425493] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.435433] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.435498] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.435512] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.435520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.435527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.435542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.445476] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.445533] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.445546] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.445556] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.445563] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.445578] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.455480] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.455531] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.455544] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.455551] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.455557] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.455572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.465522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.465605] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.465619] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.465627] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.465633] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.465648] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.475472] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.475558] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.475573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.475580] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.475587] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.475602] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:34.763 [2024-11-20 10:44:15.485600] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:34.763 [2024-11-20 10:44:15.485655] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:34.763 [2024-11-20 10:44:15.485670] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:34.763 [2024-11-20 10:44:15.485677] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:34.763 [2024-11-20 10:44:15.485684] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:34.763 [2024-11-20 10:44:15.485699] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:34.763 qpair failed and we were unable to recover it.
00:26:35.022 [2024-11-20 10:44:15.495567] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.022 [2024-11-20 10:44:15.495660] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.022 [2024-11-20 10:44:15.495674] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.022 [2024-11-20 10:44:15.495681] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.022 [2024-11-20 10:44:15.495687] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.022 [2024-11-20 10:44:15.495703] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.022 qpair failed and we were unable to recover it.
00:26:35.022 [2024-11-20 10:44:15.505625] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.022 [2024-11-20 10:44:15.505693] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.022 [2024-11-20 10:44:15.505708] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.022 [2024-11-20 10:44:15.505716] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.022 [2024-11-20 10:44:15.505722] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.022 [2024-11-20 10:44:15.505739] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.022 qpair failed and we were unable to recover it.
00:26:35.022 [2024-11-20 10:44:15.515557] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.022 [2024-11-20 10:44:15.515621] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.022 [2024-11-20 10:44:15.515636] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.022 [2024-11-20 10:44:15.515643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.022 [2024-11-20 10:44:15.515650] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.022 [2024-11-20 10:44:15.515665] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.022 qpair failed and we were unable to recover it.
00:26:35.022 [2024-11-20 10:44:15.525671] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.022 [2024-11-20 10:44:15.525730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.022 [2024-11-20 10:44:15.525745] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.022 [2024-11-20 10:44:15.525755] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.022 [2024-11-20 10:44:15.525763] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.022 [2024-11-20 10:44:15.525780] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.022 qpair failed and we were unable to recover it.
00:26:35.022 [2024-11-20 10:44:15.535686] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.022 [2024-11-20 10:44:15.535745] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.022 [2024-11-20 10:44:15.535759] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.535766] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.535773] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.535787] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.545672] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.545725] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.545739] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.545746] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.545753] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.545769] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.555690] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.555778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.555792] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.555799] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.555805] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.555820] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.565839] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.565899] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.565913] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.565921] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.565928] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.565943] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.575842] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.575925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.575940] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.575951] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.575957] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.575973] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.585879] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.585945] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.585959] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.585966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.585973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.585987] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.595804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.595857] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.595871] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.595878] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.595885] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.595900] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.605838] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.605916] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.605930] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.605937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.605944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.605959] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.615956] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.616024] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.616038] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.616045] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.616051] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.616069] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.625930] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.625978] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.625993] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.626000] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.626006] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.626022] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.635960] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.636014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.636028] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.636035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.636042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.636056] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.646025] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.646082] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.646096] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.646103] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.646110] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.646126] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.656059] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.656115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.656129] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.656135] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.023 [2024-11-20 10:44:15.656142] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.023 [2024-11-20 10:44:15.656157] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.023 qpair failed and we were unable to recover it.
00:26:35.023 [2024-11-20 10:44:15.666077] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.023 [2024-11-20 10:44:15.666143] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.023 [2024-11-20 10:44:15.666157] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.023 [2024-11-20 10:44:15.666165] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.024 [2024-11-20 10:44:15.666171] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.024 [2024-11-20 10:44:15.666187] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.024 qpair failed and we were unable to recover it.
00:26:35.024 [2024-11-20 10:44:15.676081] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.024 [2024-11-20 10:44:15.676172] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.024 [2024-11-20 10:44:15.676187] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.024 [2024-11-20 10:44:15.676194] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.024 [2024-11-20 10:44:15.676205] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.024 [2024-11-20 10:44:15.676221] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.024 qpair failed and we were unable to recover it.
00:26:35.024 [2024-11-20 10:44:15.686152] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.024 [2024-11-20 10:44:15.686232] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.024 [2024-11-20 10:44:15.686248] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.024 [2024-11-20 10:44:15.686255] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.024 [2024-11-20 10:44:15.686262] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.024 [2024-11-20 10:44:15.686277] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.024 qpair failed and we were unable to recover it.
00:26:35.024 [2024-11-20 10:44:15.696106] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.024 [2024-11-20 10:44:15.696165] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.024 [2024-11-20 10:44:15.696180] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.024 [2024-11-20 10:44:15.696188] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.024 [2024-11-20 10:44:15.696195] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.024 [2024-11-20 10:44:15.696215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.024 qpair failed and we were unable to recover it.
00:26:35.024 [2024-11-20 10:44:15.706119] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.024 [2024-11-20 10:44:15.706174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.024 [2024-11-20 10:44:15.706191] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.024 [2024-11-20 10:44:15.706199] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.024 [2024-11-20 10:44:15.706210] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.024 [2024-11-20 10:44:15.706226] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.024 qpair failed and we were unable to recover it. 
00:26:35.024 [2024-11-20 10:44:15.716218] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.024 [2024-11-20 10:44:15.716316] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.024 [2024-11-20 10:44:15.716331] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.024 [2024-11-20 10:44:15.716338] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.024 [2024-11-20 10:44:15.716344] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.024 [2024-11-20 10:44:15.716360] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.024 qpair failed and we were unable to recover it. 
00:26:35.024 [2024-11-20 10:44:15.726180] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.024 [2024-11-20 10:44:15.726243] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.024 [2024-11-20 10:44:15.726258] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.024 [2024-11-20 10:44:15.726266] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.024 [2024-11-20 10:44:15.726273] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.024 [2024-11-20 10:44:15.726289] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.024 qpair failed and we were unable to recover it. 
00:26:35.024 [2024-11-20 10:44:15.736207] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.024 [2024-11-20 10:44:15.736266] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.024 [2024-11-20 10:44:15.736281] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.024 [2024-11-20 10:44:15.736289] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.024 [2024-11-20 10:44:15.736297] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.024 [2024-11-20 10:44:15.736314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.024 qpair failed and we were unable to recover it. 
00:26:35.024 [2024-11-20 10:44:15.746278] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.024 [2024-11-20 10:44:15.746339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.024 [2024-11-20 10:44:15.746353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.024 [2024-11-20 10:44:15.746361] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.024 [2024-11-20 10:44:15.746373] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.024 [2024-11-20 10:44:15.746388] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.024 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.756251] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.756303] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.756317] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.756324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.756330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.756346] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.766329] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.766387] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.766401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.766408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.766414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.766430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.776394] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.776446] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.776461] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.776469] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.776475] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.776490] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.786347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.786402] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.786416] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.786423] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.786429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.786443] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.796422] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.796481] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.796495] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.796502] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.796508] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.796523] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.806451] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.285 [2024-11-20 10:44:15.806544] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.285 [2024-11-20 10:44:15.806558] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.285 [2024-11-20 10:44:15.806565] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.285 [2024-11-20 10:44:15.806571] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.285 [2024-11-20 10:44:15.806586] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.285 qpair failed and we were unable to recover it. 
00:26:35.285 [2024-11-20 10:44:15.816522] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.816577] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.816591] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.816598] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.816605] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.816620] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.826486] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.826586] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.826600] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.826607] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.826613] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.826628] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.836550] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.836645] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.836662] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.836669] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.836675] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.836690] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.846503] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.846559] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.846573] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.846581] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.846588] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.846604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.856529] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.856594] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.856608] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.856614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.856621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.856637] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.866641] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.866697] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.866712] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.866719] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.866726] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.866742] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.876614] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.876707] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.876724] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.876731] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.876741] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.876758] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.886699] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.886759] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.886773] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.886780] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.886787] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.886802] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.896781] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.896854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.286 [2024-11-20 10:44:15.896868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.286 [2024-11-20 10:44:15.896875] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.286 [2024-11-20 10:44:15.896882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.286 [2024-11-20 10:44:15.896897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.286 qpair failed and we were unable to recover it. 
00:26:35.286 [2024-11-20 10:44:15.906719] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.286 [2024-11-20 10:44:15.906778] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.906791] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.906798] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.906804] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.906819] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.916784] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.916838] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.916852] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.916859] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.916866] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.916881] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.926745] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.926803] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.926818] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.926825] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.926832] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.926848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.936837] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.936892] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.936906] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.936913] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.936920] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.936936] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.946857] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.946912] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.946926] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.946933] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.946940] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.946956] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.956934] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.956990] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.957004] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.957011] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.957018] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.957033] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.966914] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.287 [2024-11-20 10:44:15.966968] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.287 [2024-11-20 10:44:15.966985] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.287 [2024-11-20 10:44:15.966992] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.287 [2024-11-20 10:44:15.967000] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.287 [2024-11-20 10:44:15.967015] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.287 qpair failed and we were unable to recover it. 
00:26:35.287 [2024-11-20 10:44:15.976959] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.287 [2024-11-20 10:44:15.977020] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.287 [2024-11-20 10:44:15.977034] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.287 [2024-11-20 10:44:15.977042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.287 [2024-11-20 10:44:15.977048] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.287 [2024-11-20 10:44:15.977064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.287 qpair failed and we were unable to recover it.
00:26:35.287 [2024-11-20 10:44:15.986971] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.287 [2024-11-20 10:44:15.987022] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.287 [2024-11-20 10:44:15.987036] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.287 [2024-11-20 10:44:15.987042] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.287 [2024-11-20 10:44:15.987049] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.287 [2024-11-20 10:44:15.987064] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.287 qpair failed and we were unable to recover it.
00:26:35.287 [2024-11-20 10:44:15.997001] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.287 [2024-11-20 10:44:15.997052] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.287 [2024-11-20 10:44:15.997067] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.287 [2024-11-20 10:44:15.997073] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.287 [2024-11-20 10:44:15.997080] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.288 [2024-11-20 10:44:15.997095] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.288 qpair failed and we were unable to recover it.
00:26:35.288 [2024-11-20 10:44:16.007030] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.288 [2024-11-20 10:44:16.007087] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.288 [2024-11-20 10:44:16.007100] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.288 [2024-11-20 10:44:16.007110] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.288 [2024-11-20 10:44:16.007117] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.288 [2024-11-20 10:44:16.007131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.288 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.017088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.017141] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.017154] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.017162] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.017168] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.017183] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.027136] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.027192] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.027211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.027218] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.027224] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.027240] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.037130] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.037194] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.037211] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.037219] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.037226] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.037241] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.047155] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.047221] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.047235] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.047243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.047249] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.047265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.057166] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.057229] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.057244] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.057251] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.057257] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.057273] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.067188] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.067249] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.067264] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.067271] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.067278] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.067293] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.077268] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.077324] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.077339] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.077346] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.077353] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.077368] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.087304] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.087409] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.087424] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.087432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.087439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.087454] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.551 [2024-11-20 10:44:16.097281] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.551 [2024-11-20 10:44:16.097336] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.551 [2024-11-20 10:44:16.097352] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.551 [2024-11-20 10:44:16.097360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.551 [2024-11-20 10:44:16.097367] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.551 [2024-11-20 10:44:16.097382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.551 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.107300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.107356] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.107369] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.107377] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.107384] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.107399] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.117323] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.117377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.117394] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.117401] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.117407] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.117423] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.127359] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.127412] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.127426] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.127433] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.127440] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.127455] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.137391] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.137470] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.137484] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.137495] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.137501] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.137516] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.147417] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.147475] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.147489] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.147496] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.147502] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.147517] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.157439] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.157491] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.157505] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.157512] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.157519] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.157534] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.167473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.167528] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.167542] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.167549] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.167555] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.167572] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.177528] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.177616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.177633] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.177640] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.177647] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.177667] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.187518] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.187572] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.187588] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.187595] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.187602] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.187617] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.197547] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.197603] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.197617] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.197625] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.197631] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.197647] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.207577] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.207634] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.207648] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.207655] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.207661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.207676] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.217608] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.552 [2024-11-20 10:44:16.217669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.552 [2024-11-20 10:44:16.217685] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.552 [2024-11-20 10:44:16.217693] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.552 [2024-11-20 10:44:16.217699] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.552 [2024-11-20 10:44:16.217715] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.552 qpair failed and we were unable to recover it.
00:26:35.552 [2024-11-20 10:44:16.227620] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.553 [2024-11-20 10:44:16.227676] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.553 [2024-11-20 10:44:16.227690] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.553 [2024-11-20 10:44:16.227698] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.553 [2024-11-20 10:44:16.227704] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.553 [2024-11-20 10:44:16.227719] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.553 qpair failed and we were unable to recover it.
00:26:35.553 [2024-11-20 10:44:16.237650] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.553 [2024-11-20 10:44:16.237699] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.553 [2024-11-20 10:44:16.237713] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.553 [2024-11-20 10:44:16.237720] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.553 [2024-11-20 10:44:16.237727] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.553 [2024-11-20 10:44:16.237741] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.553 qpair failed and we were unable to recover it.
00:26:35.553 [2024-11-20 10:44:16.247720] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.553 [2024-11-20 10:44:16.247775] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.553 [2024-11-20 10:44:16.247789] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.553 [2024-11-20 10:44:16.247796] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.553 [2024-11-20 10:44:16.247803] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.553 [2024-11-20 10:44:16.247818] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.553 qpair failed and we were unable to recover it.
00:26:35.553 [2024-11-20 10:44:16.257748] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.553 [2024-11-20 10:44:16.257806] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.553 [2024-11-20 10:44:16.257820] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.553 [2024-11-20 10:44:16.257827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.553 [2024-11-20 10:44:16.257833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.553 [2024-11-20 10:44:16.257848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.553 qpair failed and we were unable to recover it.
00:26:35.553 [2024-11-20 10:44:16.267747] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.553 [2024-11-20 10:44:16.267800] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.553 [2024-11-20 10:44:16.267819] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.553 [2024-11-20 10:44:16.267827] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.553 [2024-11-20 10:44:16.267833] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.553 [2024-11-20 10:44:16.267848] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.553 qpair failed and we were unable to recover it.
00:26:35.824 [2024-11-20 10:44:16.277815] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.824 [2024-11-20 10:44:16.277910] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.824 [2024-11-20 10:44:16.277928] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.824 [2024-11-20 10:44:16.277937] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.824 [2024-11-20 10:44:16.277944] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.824 [2024-11-20 10:44:16.277963] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.824 qpair failed and we were unable to recover it.
00:26:35.824 [2024-11-20 10:44:16.287853] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.824 [2024-11-20 10:44:16.287959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.824 [2024-11-20 10:44:16.287975] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.824 [2024-11-20 10:44:16.287982] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.824 [2024-11-20 10:44:16.287988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.824 [2024-11-20 10:44:16.288004] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.824 qpair failed and we were unable to recover it.
00:26:35.824 [2024-11-20 10:44:16.297848] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.824 [2024-11-20 10:44:16.297902] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.824 [2024-11-20 10:44:16.297916] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.824 [2024-11-20 10:44:16.297923] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.824 [2024-11-20 10:44:16.297930] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.824 [2024-11-20 10:44:16.297945] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.824 qpair failed and we were unable to recover it.
00:26:35.824 [2024-11-20 10:44:16.307898] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.824 [2024-11-20 10:44:16.307955] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.824 [2024-11-20 10:44:16.307968] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.824 [2024-11-20 10:44:16.307976] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.824 [2024-11-20 10:44:16.307985] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.824 [2024-11-20 10:44:16.308001] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.824 qpair failed and we were unable to recover it.
00:26:35.825 [2024-11-20 10:44:16.317887] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:35.825 [2024-11-20 10:44:16.317944] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:35.825 [2024-11-20 10:44:16.317958] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:35.825 [2024-11-20 10:44:16.317966] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:35.825 [2024-11-20 10:44:16.317973] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:35.825 [2024-11-20 10:44:16.317988] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:35.825 qpair failed and we were unable to recover it.
00:26:35.825 [2024-11-20 10:44:16.327918] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.327973] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.327987] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.327994] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.328001] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.328016] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.337943] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.337998] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.338011] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.338019] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.338025] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.338041] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.348018] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.348100] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.348114] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.348121] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.348127] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.348142] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.358040] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.358095] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.358109] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.358116] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.358122] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.358137] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.368074] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.368136] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.368151] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.368159] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.368165] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.368181] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.378071] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.378129] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.378145] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.378152] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.378159] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.378175] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.388099] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.388186] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.388200] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.388211] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.388217] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.388232] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.398145] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.398199] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.398220] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.398228] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.398234] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.398249] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.408227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.408285] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.408298] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.408305] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.408311] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.408326] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.418221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.418275] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.418290] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.418297] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.418303] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.418318] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.428215] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.428272] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.825 [2024-11-20 10:44:16.428285] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.825 [2024-11-20 10:44:16.428293] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.825 [2024-11-20 10:44:16.428300] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.825 [2024-11-20 10:44:16.428315] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.825 qpair failed and we were unable to recover it. 
00:26:35.825 [2024-11-20 10:44:16.438227] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.825 [2024-11-20 10:44:16.438276] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.438289] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.438296] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.438306] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.438321] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.448266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.448320] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.448334] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.448341] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.448348] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.448363] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.458298] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.458354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.458368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.458376] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.458383] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.458397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.468347] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.468400] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.468415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.468422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.468429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.468444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.478349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.478401] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.478415] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.478422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.478429] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.478444] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.488407] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.488465] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.488479] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.488486] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.488493] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.488508] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.498450] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.498507] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.498520] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.498528] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.498535] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.498549] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.508446] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.508500] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.508513] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.508520] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.508527] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.508542] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.518467] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.518563] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.518578] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.518586] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.518592] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.518607] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.528553] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.528608] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.528626] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.528633] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.528640] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.528655] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.538532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.538588] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.538602] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.538608] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.538615] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.538630] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:35.826 [2024-11-20 10:44:16.548559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:35.826 [2024-11-20 10:44:16.548613] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:35.826 [2024-11-20 10:44:16.548627] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:35.826 [2024-11-20 10:44:16.548635] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:35.826 [2024-11-20 10:44:16.548642] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:35.826 [2024-11-20 10:44:16.548657] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:35.826 qpair failed and we were unable to recover it. 
00:26:36.086 [2024-11-20 10:44:16.558579] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.086 [2024-11-20 10:44:16.558636] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.086 [2024-11-20 10:44:16.558651] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.086 [2024-11-20 10:44:16.558659] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.086 [2024-11-20 10:44:16.558665] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.086 [2024-11-20 10:44:16.558680] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.086 qpair failed and we were unable to recover it. 
00:26:36.086 [2024-11-20 10:44:16.568559] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.086 [2024-11-20 10:44:16.568616] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.086 [2024-11-20 10:44:16.568631] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.086 [2024-11-20 10:44:16.568643] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.086 [2024-11-20 10:44:16.568649] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.086 [2024-11-20 10:44:16.568664] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.086 qpair failed and we were unable to recover it. 
00:26:36.086 [2024-11-20 10:44:16.578653] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.086 [2024-11-20 10:44:16.578710] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.086 [2024-11-20 10:44:16.578725] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.086 [2024-11-20 10:44:16.578733] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.086 [2024-11-20 10:44:16.578739] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.086 [2024-11-20 10:44:16.578755] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.086 qpair failed and we were unable to recover it. 
00:26:36.086 [2024-11-20 10:44:16.588727] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.086 [2024-11-20 10:44:16.588784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.086 [2024-11-20 10:44:16.588799] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.086 [2024-11-20 10:44:16.588806] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.086 [2024-11-20 10:44:16.588813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.086 [2024-11-20 10:44:16.588829] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.086 qpair failed and we were unable to recover it. 
00:26:36.086 [2024-11-20 10:44:16.598623] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.086 [2024-11-20 10:44:16.598678] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.086 [2024-11-20 10:44:16.598692] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.086 [2024-11-20 10:44:16.598700] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.086 [2024-11-20 10:44:16.598707] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.086 [2024-11-20 10:44:16.598724] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.086 qpair failed and we were unable to recover it. 
00:26:36.086–00:26:36.348 [2024-11-20 10:44:16.608795 – 10:44:16.939779] (the identical six-record error sequence above repeats for every connect attempt, roughly every 10 ms: ctrlr.c:762 Unknown controller ID 0x1; nvme_fabric.c:599 Connect command failed, rc -5, trtype:TCP traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1; nvme_fabric.c:610 sct 1, sc 130; nvme_tcp.c:2348/2125 failed CONNECT on tqpair=0x7ff268000b90; nvme_qpair.c:812 CQ transport error -6 on qpair id 1; each time ending "qpair failed and we were unable to recover it.")
00:26:36.348 [2024-11-20 10:44:16.949666] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.348 [2024-11-20 10:44:16.949730] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.349 [2024-11-20 10:44:16.949744] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.349 [2024-11-20 10:44:16.949751] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.349 [2024-11-20 10:44:16.949757] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.349 [2024-11-20 10:44:16.949773] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.349 qpair failed and we were unable to recover it. 
00:26:36.349 [2024-11-20 10:44:16.959729] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:16.959783] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:16.959800] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:16.959807] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:16.959813] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:16.959828] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:16.969763] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:16.969822] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:16.969836] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:16.969844] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:16.969850] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:16.969865] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:16.979785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:16.979843] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:16.979857] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:16.979864] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:16.979871] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:16.979886] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:16.989803] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:16.989854] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:16.989868] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:16.989876] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:16.989882] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:16.989897] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:16.999811] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:16.999870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:16.999884] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:16.999891] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:16.999902] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:16.999917] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.009884] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.009951] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.009965] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.009973] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.009979] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.009993] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.019869] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.019925] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.019941] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.019948] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.019955] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.019970] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.029961] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.030014] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.030029] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.030035] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.030042] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.030057] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.039957] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.040019] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.040033] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.040041] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.040047] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.040062] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.050016] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.050070] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.050084] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.050091] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.050097] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.050114] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.059989] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.060075] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.060089] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.060097] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.060103] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.060117] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.349 [2024-11-20 10:44:17.070035] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.349 [2024-11-20 10:44:17.070091] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.349 [2024-11-20 10:44:17.070106] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.349 [2024-11-20 10:44:17.070113] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.349 [2024-11-20 10:44:17.070120] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.349 [2024-11-20 10:44:17.070136] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.349 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.080114] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.080169] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.080182] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.080189] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.080196] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.080215] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.090088] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.090145] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.090163] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.090170] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.090177] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.090192] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.100117] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.100174] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.100188] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.100196] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.100206] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.100222] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.110212] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.110270] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.110283] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.110291] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.110298] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.110314] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.120223] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.120290] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.120306] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.120313] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.120319] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.120334] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.130221] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.130298] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.130312] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.130324] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.130330] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.130345] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.140266] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.140337] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.609 [2024-11-20 10:44:17.140351] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.609 [2024-11-20 10:44:17.140358] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.609 [2024-11-20 10:44:17.140365] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.609 [2024-11-20 10:44:17.140382] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.609 qpair failed and we were unable to recover it.
00:26:36.609 [2024-11-20 10:44:17.150254] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.609 [2024-11-20 10:44:17.150339] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.150353] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.150360] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.150366] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.150381] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.160277] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.160382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.160397] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.160403] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.160410] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.160426] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.170349] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.170410] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.170425] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.170432] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.170439] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.170458] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.180292] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.180354] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.180368] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.180375] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.180381] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.180397] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.190333] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.190386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.190401] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.190408] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.190414] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.190430] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.200371] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.200463] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.200477] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.200485] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.200491] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.200506] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.210473] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.210539] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.210554] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.210562] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.210568] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.210584] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.220490] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.220546] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.220561] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.220568] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.220574] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.220590] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.230434] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.230489] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.230504] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.230510] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.230517] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.230533] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.240512] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.240561] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.240575] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.240582] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.240589] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.240604] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.250573] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1
00:26:36.610 [2024-11-20 10:44:17.250630] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1
00:26:36.610 [2024-11-20 10:44:17.250646] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130
00:26:36.610 [2024-11-20 10:44:17.250654] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command
00:26:36.610 [2024-11-20 10:44:17.250661] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90
00:26:36.610 [2024-11-20 10:44:17.250677] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1
00:26:36.610 qpair failed and we were unable to recover it.
00:26:36.610 [2024-11-20 10:44:17.260532] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.610 [2024-11-20 10:44:17.260590] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.610 [2024-11-20 10:44:17.260603] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.610 [2024-11-20 10:44:17.260614] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.610 [2024-11-20 10:44:17.260621] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.610 [2024-11-20 10:44:17.260636] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.610 qpair failed and we were unable to recover it. 
00:26:36.610 [2024-11-20 10:44:17.270610] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.610 [2024-11-20 10:44:17.270669] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.610 [2024-11-20 10:44:17.270684] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.610 [2024-11-20 10:44:17.270690] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.610 [2024-11-20 10:44:17.270697] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.610 [2024-11-20 10:44:17.270712] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.610 qpair failed and we were unable to recover it. 
00:26:36.610 [2024-11-20 10:44:17.280652] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.280701] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.280715] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.280722] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.280730] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.280745] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.611 [2024-11-20 10:44:17.290708] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.290765] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.290779] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.290786] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.290792] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.290808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.611 [2024-11-20 10:44:17.300710] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.300762] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.300776] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.300783] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.300789] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.300808] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.611 [2024-11-20 10:44:17.310771] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.310870] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.310886] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.310894] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.310901] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.310916] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.611 [2024-11-20 10:44:17.320744] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.320796] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.320810] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.320818] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.320824] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.320839] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.611 [2024-11-20 10:44:17.330728] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.611 [2024-11-20 10:44:17.330784] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.611 [2024-11-20 10:44:17.330798] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.611 [2024-11-20 10:44:17.330805] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.611 [2024-11-20 10:44:17.330812] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.611 [2024-11-20 10:44:17.330827] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.611 qpair failed and we were unable to recover it. 
00:26:36.870 [2024-11-20 10:44:17.340832] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.870 [2024-11-20 10:44:17.340885] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.870 [2024-11-20 10:44:17.340899] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.870 [2024-11-20 10:44:17.340905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.870 [2024-11-20 10:44:17.340912] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.870 [2024-11-20 10:44:17.340927] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.870 qpair failed and we were unable to recover it. 
00:26:36.870 [2024-11-20 10:44:17.350785] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.870 [2024-11-20 10:44:17.350887] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.870 [2024-11-20 10:44:17.350901] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.870 [2024-11-20 10:44:17.350909] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.870 [2024-11-20 10:44:17.350917] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.870 [2024-11-20 10:44:17.350933] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.870 qpair failed and we were unable to recover it. 
00:26:36.870 [2024-11-20 10:44:17.360804] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.870 [2024-11-20 10:44:17.360884] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.360898] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.360905] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.360911] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.360926] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.370903] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.370959] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.370974] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.370981] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.370988] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.371003] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.380941] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.380996] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.381010] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.381017] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.381024] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.381039] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.390978] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.391030] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.391048] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.391055] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.391061] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.391077] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.400994] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.401044] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.401058] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.401065] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.401072] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.401088] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.411034] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.411086] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.411102] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.411109] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.411116] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.411131] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.421060] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.421115] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.421130] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.421137] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.421144] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.421159] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.431137] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.431197] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.431214] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.431221] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.431231] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.431247] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.441107] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.441162] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.441176] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.441183] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.441190] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.441209] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.451151] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.451208] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.451222] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.451229] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.451236] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.451251] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.871 qpair failed and we were unable to recover it. 
00:26:36.871 [2024-11-20 10:44:17.461168] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.871 [2024-11-20 10:44:17.461222] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.871 [2024-11-20 10:44:17.461236] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.871 [2024-11-20 10:44:17.461243] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.871 [2024-11-20 10:44:17.461250] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.871 [2024-11-20 10:44:17.461265] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.471196] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.471264] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.471279] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.471287] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.471293] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.872 [2024-11-20 10:44:17.471310] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.481219] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.481313] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.481328] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.481335] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.481341] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.872 [2024-11-20 10:44:17.481357] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.491261] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.491330] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.491344] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.491352] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.491358] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff268000b90 00:26:36.872 [2024-11-20 10:44:17.491374] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.501287] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.501391] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.501449] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.501479] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.501503] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff25c000b90 00:26:36.872 [2024-11-20 10:44:17.501556] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.511300] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.511382] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.511410] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.511424] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.511438] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff25c000b90 00:26:36.872 [2024-11-20 10:44:17.511471] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.521326] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.521386] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.521411] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.521422] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.521431] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff25c000b90 00:26:36.872 [2024-11-20 10:44:17.521452] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:36.872 qpair failed and we were unable to recover it. 
00:26:36.872 [2024-11-20 10:44:17.531330] ctrlr.c: 762:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:26:36.872 [2024-11-20 10:44:17.531388] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:26:36.872 [2024-11-20 10:44:17.531402] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:26:36.872 [2024-11-20 10:44:17.531409] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:26:36.872 [2024-11-20 10:44:17.531415] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7ff25c000b90 00:26:36.872 [2024-11-20 10:44:17.531431] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] CQ transport error -6 (No such device or address) on qpair id 4 00:26:36.872 qpair failed and we were unable to recover it. 00:26:36.872 [2024-11-20 10:44:17.531585] nvme_ctrlr.c:4518:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1, 1] Submitting Keep Alive failed 00:26:36.872 A controller has encountered a failure and is being reset. 00:26:37.129 Controller properly reset. 00:26:37.129 Initializing NVMe Controllers 00:26:37.129 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.129 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:26:37.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:26:37.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:26:37.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:26:37.129 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:26:37.129 Initialization complete. Launching workers. 
00:26:37.129 Starting thread on core 1 00:26:37.129 Starting thread on core 2 00:26:37.129 Starting thread on core 3 00:26:37.129 Starting thread on core 0 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:26:37.129 00:26:37.129 real 0m10.890s 00:26:37.129 user 0m19.352s 00:26:37.129 sys 0m4.733s 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:26:37.129 ************************************ 00:26:37.129 END TEST nvmf_target_disconnect_tc2 00:26:37.129 ************************************ 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # nvmfcleanup 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@99 -- # sync 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@102 -- # set +e 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@103 -- # for i in {1..20} 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:26:37.129 rmmod nvme_tcp 00:26:37.129 rmmod nvme_fabrics 00:26:37.129 rmmod nvme_keyring 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@105 -- # modprobe -v -r 
nvme-fabrics 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@106 -- # set -e 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@107 -- # return 0 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@336 -- # '[' -n 3364438 ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@337 -- # killprocess 3364438 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # '[' -z 3364438 ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # kill -0 3364438 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # uname 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3364438 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@960 -- # process_name=reactor_4 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@964 -- # '[' reactor_4 = sudo ']' 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3364438' 00:26:37.129 killing process with pid 3364438 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@973 -- # kill 3364438 00:26:37.129 10:44:17 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@978 -- # wait 3364438 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # nvmf_fini 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/setup.sh@264 -- # local dev 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@267 -- # remove_target_ns 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:37.387 10:44:18 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@268 -- # delete_main_bridge 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@130 -- # return 0 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@271 -- # [[ -e 
/sys/class/net/cvl_0_1/address ]] 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # _dev=0 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@41 -- # dev_map=() 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/setup.sh@284 -- # iptr 00:26:39.918 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-save 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@542 -- # iptables-restore 00:26:39.919 00:26:39.919 real 0m19.770s 00:26:39.919 user 0m47.477s 00:26:39.919 sys 0m9.691s 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1130 -- # xtrace_disable 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:26:39.919 ************************************ 00:26:39.919 END TEST nvmf_target_disconnect 00:26:39.919 ************************************ 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # [[ tcp == \t\c\p ]] 00:26:39.919 
10:44:20 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@31 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:39.919 ************************************ 00:26:39.919 START TEST nvmf_digest 00:26:39.919 ************************************ 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:39.919 * Looking for test storage... 00:26:39.919 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lcov --version 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@333 -- # local ver1 ver1_l 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@334 -- # local ver2 ver2_l 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # IFS=.-: 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@336 -- # read -ra ver1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 -- # IFS=.-: 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@337 
-- # read -ra ver2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@338 -- # local 'op=<' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@340 -- # ver1_l=2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@341 -- # ver2_l=1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@344 -- # case "$op" in 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@345 -- # : 1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v = 0 )) 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # decimal 1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@365 -- # ver1[v]=1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # decimal 2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@353 -- # local d=2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@355 -- # echo 2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@366 -- # ver2[v]=2 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:26:39.919 10:44:20 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@368 -- # return 0 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:26:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.919 --rc genhtml_branch_coverage=1 00:26:39.919 --rc genhtml_function_coverage=1 00:26:39.919 --rc genhtml_legend=1 00:26:39.919 --rc geninfo_all_blocks=1 00:26:39.919 --rc geninfo_unexecuted_blocks=1 00:26:39.919 00:26:39.919 ' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:26:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.919 --rc genhtml_branch_coverage=1 00:26:39.919 --rc genhtml_function_coverage=1 00:26:39.919 --rc genhtml_legend=1 00:26:39.919 --rc geninfo_all_blocks=1 00:26:39.919 --rc geninfo_unexecuted_blocks=1 00:26:39.919 00:26:39.919 ' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:26:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.919 --rc genhtml_branch_coverage=1 00:26:39.919 --rc genhtml_function_coverage=1 00:26:39.919 --rc genhtml_legend=1 00:26:39.919 --rc geninfo_all_blocks=1 00:26:39.919 --rc geninfo_unexecuted_blocks=1 00:26:39.919 00:26:39.919 ' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:26:39.919 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:26:39.919 --rc genhtml_branch_coverage=1 00:26:39.919 --rc genhtml_function_coverage=1 00:26:39.919 --rc genhtml_legend=1 00:26:39.919 --rc geninfo_all_blocks=1 00:26:39.919 --rc geninfo_unexecuted_blocks=1 00:26:39.919 00:26:39.919 ' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@15 -- # shopt -s extglob 00:26:39.919 10:44:20 
nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:39.919 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@50 -- # : 0 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:26:39.920 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@54 -- # have_pci_nics=0 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # prepare_net_devs 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # local -g is_hw=no 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # remove_target_ns 00:26:39.920 
10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # xtrace_disable 00:26:39.920 10:44:20 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # pci_devs=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@131 -- # local -a pci_devs 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # pci_net_devs=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # pci_drivers=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@133 -- # local -A pci_drivers 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # net_devs=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@135 -- # local -ga net_devs 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # e810=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@136 -- # local -ga e810 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # x722=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@137 -- # 
local -ga x722 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # mlx=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@138 -- # local -ga mlx 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:26:46.515 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:46.515 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:46.515 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:26:46.515 10:44:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:46.515 Found net devices under 0000:86:00.0: cvl_0_0 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # [[ up == up ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:46.515 Found net devices under 0000:86:00.1: cvl_0_1 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@262 -- # is_hw=yes 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@257 -- # create_target_ns 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.515 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@27 -- # local -gA dev_map 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@28 -- # local -g _dev 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # ips=() 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:26:46.515 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:26:46.515 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772161 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:26:46.516 10:44:26 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:26:46.516 10.0.0.1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@11 -- # local val=167772162 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:26:46.516 10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:26:46.516 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:26:46.516 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@38 -- # ping_ips 1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 
00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:26:46.516 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:46.516 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.460 ms 00:26:46.516 00:26:46.516 --- 10.0.0.1 ping statistics --- 00:26:46.516 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.516 rtt min/avg/max/mdev = 0.460/0.460/0.460/0.000 ms 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- 
nvmf/setup.sh@107 -- # local dev=target0 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:26:46.516 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:26:46.517 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:26:46.517 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:26:46.517 00:26:46.517 --- 10.0.0.2 ping statistics --- 00:26:46.517 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:46.517 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair++ )) 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # return 0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 
00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=initiator1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:26:46.517 
10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # get_net_dev target1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@107 -- # local dev=target1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@109 -- # return 1 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@168 -- # dev= 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@169 -- # return 0 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 ************************************ 00:26:46.517 START TEST nvmf_digest_clean 00:26:46.517 ************************************ 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1129 -- # run_digest 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@726 -- # xtrace_disable 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 
00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@328 -- # nvmfpid=3369151 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@329 -- # waitforlisten 3369151 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3369151 ']' 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:46.517 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.517 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.517 [2024-11-20 10:44:26.569735] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:26:46.517 [2024-11-20 10:44:26.569779] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:46.517 [2024-11-20 10:44:26.645780] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.517 [2024-11-20 10:44:26.686420] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:46.517 [2024-11-20 10:44:26.686461] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:46.517 [2024-11-20 10:44:26.686468] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:46.517 [2024-11-20 10:44:26.686474] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:46.518 [2024-11-20 10:44:26.686479] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:46.518 [2024-11-20 10:44:26.687027] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@732 -- # xtrace_disable 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@563 -- # xtrace_disable 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 null0 00:26:46.518 [2024-11-20 10:44:26.837841] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:46.518 [2024-11-20 10:44:26.862035] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 
00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3369171 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3369171 /var/tmp/bperf.sock 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3369171 ']' 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:46.518 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:46.518 10:44:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:46.518 [2024-11-20 10:44:26.912774] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:46.518 [2024-11-20 10:44:26.912814] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369171 ] 00:26:46.518 [2024-11-20 10:44:26.985920] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:46.518 [2024-11-20 10:44:27.027556] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:46.518 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:46.518 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:46.518 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:46.518 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:46.518 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:46.778 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:46.778 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 
-s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:47.036 nvme0n1 00:26:47.294 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:47.294 10:44:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:47.294 Running I/O for 2 seconds... 00:26:49.161 24968.00 IOPS, 97.53 MiB/s [2024-11-20T09:44:30.150Z] 25460.00 IOPS, 99.45 MiB/s 00:26:49.419 Latency(us) 00:26:49.419 [2024-11-20T09:44:30.150Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.419 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:49.419 nvme0n1 : 2.04 24968.54 97.53 0.00 0.00 5020.29 2668.25 45438.29 00:26:49.419 [2024-11-20T09:44:30.150Z] =================================================================================================================== 00:26:49.419 [2024-11-20T09:44:30.150Z] Total : 24968.54 97.53 0.00 0.00 5020.29 2668.25 45438.29 00:26:49.419 { 00:26:49.419 "results": [ 00:26:49.419 { 00:26:49.419 "job": "nvme0n1", 00:26:49.419 "core_mask": "0x2", 00:26:49.419 "workload": "randread", 00:26:49.419 "status": "finished", 00:26:49.419 "queue_depth": 128, 00:26:49.419 "io_size": 4096, 00:26:49.419 "runtime": 2.044493, 00:26:49.419 "iops": 24968.5374320186, 00:26:49.419 "mibps": 97.53334934382265, 00:26:49.419 "io_failed": 0, 00:26:49.419 "io_timeout": 0, 00:26:49.419 "avg_latency_us": 5020.290654547354, 00:26:49.419 "min_latency_us": 2668.2514285714287, 00:26:49.419 "max_latency_us": 45438.293333333335 00:26:49.419 } 00:26:49.419 ], 00:26:49.419 "core_count": 1 00:26:49.419 } 00:26:49.419 10:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:49.419 10:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # 
get_accel_stats 00:26:49.419 10:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:49.419 10:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:49.419 | select(.opcode=="crc32c") 00:26:49.419 | "\(.module_name) \(.executed)"' 00:26:49.419 10:44:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3369171 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3369171 ']' 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3369171 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:49.419 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369171 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369171' 00:26:49.678 killing process with pid 3369171 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3369171 00:26:49.678 Received shutdown signal, test time was about 2.000000 seconds 00:26:49.678 00:26:49.678 Latency(us) 00:26:49.678 [2024-11-20T09:44:30.409Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:49.678 [2024-11-20T09:44:30.409Z] =================================================================================================================== 00:26:49.678 [2024-11-20T09:44:30.409Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3369171 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3369862 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # 
waitforlisten 3369862 /var/tmp/bperf.sock 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3369862 ']' 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:49.678 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:49.678 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:49.678 [2024-11-20 10:44:30.374532] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:49.678 [2024-11-20 10:44:30.374576] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3369862 ] 00:26:49.678 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:49.678 Zero copy mechanism will not be used. 
00:26:49.936 [2024-11-20 10:44:30.449096] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:49.936 [2024-11-20 10:44:30.490502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:49.936 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:49.936 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:49.936 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:49.936 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:49.936 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:50.194 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.194 10:44:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:50.452 nvme0n1 00:26:50.452 10:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:50.452 10:44:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:50.452 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:50.452 Zero copy mechanism will not be used. 00:26:50.452 Running I/O for 2 seconds... 
00:26:52.761 5684.00 IOPS, 710.50 MiB/s [2024-11-20T09:44:33.492Z] 5862.50 IOPS, 732.81 MiB/s 00:26:52.761 Latency(us) 00:26:52.761 [2024-11-20T09:44:33.492Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.761 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:52.761 nvme0n1 : 2.00 5862.01 732.75 0.00 0.00 2727.08 674.86 9362.29 00:26:52.761 [2024-11-20T09:44:33.492Z] =================================================================================================================== 00:26:52.761 [2024-11-20T09:44:33.492Z] Total : 5862.01 732.75 0.00 0.00 2727.08 674.86 9362.29 00:26:52.761 { 00:26:52.761 "results": [ 00:26:52.761 { 00:26:52.761 "job": "nvme0n1", 00:26:52.761 "core_mask": "0x2", 00:26:52.761 "workload": "randread", 00:26:52.761 "status": "finished", 00:26:52.761 "queue_depth": 16, 00:26:52.761 "io_size": 131072, 00:26:52.761 "runtime": 2.003407, 00:26:52.761 "iops": 5862.014059050408, 00:26:52.761 "mibps": 732.751757381301, 00:26:52.761 "io_failed": 0, 00:26:52.761 "io_timeout": 0, 00:26:52.761 "avg_latency_us": 2727.0756247567147, 00:26:52.761 "min_latency_us": 674.8647619047618, 00:26:52.761 "max_latency_us": 9362.285714285714 00:26:52.761 } 00:26:52.762 ], 00:26:52.762 "core_count": 1 00:26:52.762 } 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:52.762 | select(.opcode=="crc32c") 00:26:52.762 | "\(.module_name) \(.executed)"' 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3369862 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3369862 ']' 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3369862 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369862 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369862' 00:26:52.762 killing process with pid 3369862 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3369862 00:26:52.762 Received shutdown signal, test time was about 2.000000 seconds 
00:26:52.762 00:26:52.762 Latency(us) 00:26:52.762 [2024-11-20T09:44:33.493Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:52.762 [2024-11-20T09:44:33.493Z] =================================================================================================================== 00:26:52.762 [2024-11-20T09:44:33.493Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:52.762 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3369862 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3370333 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3370333 /var/tmp/bperf.sock 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3370333 ']' 00:26:53.021 10:44:33 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:53.021 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:53.021 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:53.021 [2024-11-20 10:44:33.616262] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:53.021 [2024-11-20 10:44:33.616322] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370333 ] 00:26:53.021 [2024-11-20 10:44:33.692299] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:53.021 [2024-11-20 10:44:33.728906] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:53.280 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:53.280 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:53.280 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:53.280 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:53.280 10:44:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:53.538 10:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.538 10:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:53.795 nvme0n1 00:26:53.795 10:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:53.795 10:44:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:54.053 Running I/O for 2 seconds... 
00:26:55.920 27214.00 IOPS, 106.30 MiB/s [2024-11-20T09:44:36.651Z] 27315.00 IOPS, 106.70 MiB/s 00:26:55.920 Latency(us) 00:26:55.920 [2024-11-20T09:44:36.651Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:55.920 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:55.920 nvme0n1 : 2.01 27316.00 106.70 0.00 0.00 4678.41 3495.25 8488.47 00:26:55.920 [2024-11-20T09:44:36.651Z] =================================================================================================================== 00:26:55.920 [2024-11-20T09:44:36.651Z] Total : 27316.00 106.70 0.00 0.00 4678.41 3495.25 8488.47 00:26:55.920 { 00:26:55.920 "results": [ 00:26:55.920 { 00:26:55.920 "job": "nvme0n1", 00:26:55.920 "core_mask": "0x2", 00:26:55.920 "workload": "randwrite", 00:26:55.920 "status": "finished", 00:26:55.920 "queue_depth": 128, 00:26:55.920 "io_size": 4096, 00:26:55.920 "runtime": 2.005784, 00:26:55.920 "iops": 27316.00212186357, 00:26:55.920 "mibps": 106.70313328852957, 00:26:55.920 "io_failed": 0, 00:26:55.920 "io_timeout": 0, 00:26:55.920 "avg_latency_us": 4678.409983330292, 00:26:55.920 "min_latency_us": 3495.2533333333336, 00:26:55.920 "max_latency_us": 8488.47238095238 00:26:55.920 } 00:26:55.920 ], 00:26:55.920 "core_count": 1 00:26:55.920 } 00:26:55.920 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:55.920 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:55.920 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:55.920 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:55.920 | select(.opcode=="crc32c") 00:26:55.920 | "\(.module_name) \(.executed)"' 00:26:55.920 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3370333 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3370333 ']' 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3370333 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3370333 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3370333' 00:26:56.178 killing process with pid 3370333 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3370333 00:26:56.178 Received shutdown signal, test time was about 2.000000 seconds 
00:26:56.178 00:26:56.178 Latency(us) 00:26:56.178 [2024-11-20T09:44:36.909Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:56.178 [2024-11-20T09:44:36.909Z] =================================================================================================================== 00:26:56.178 [2024-11-20T09:44:36.909Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:56.178 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3370333 00:26:56.436 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=3370812 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 3370812 /var/tmp/bperf.sock 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@835 -- # '[' -z 3370812 ']' 00:26:56.437 10:44:36 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:56.437 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:26:56.437 10:44:36 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:56.437 [2024-11-20 10:44:37.026622] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:26:56.437 [2024-11-20 10:44:37.026673] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3370812 ] 00:26:56.437 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:56.437 Zero copy mechanism will not be used. 
00:26:56.437 [2024-11-20 10:44:37.101960] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:56.437 [2024-11-20 10:44:37.143596] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:26:56.695 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:26:56.695 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@868 -- # return 0 00:26:56.695 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:56.695 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:56.695 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:56.954 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:56.954 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:57.212 nvme0n1 00:26:57.212 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:57.212 10:44:37 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:57.470 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:57.470 Zero copy mechanism will not be used. 00:26:57.470 Running I/O for 2 seconds... 
00:26:59.336 6974.00 IOPS, 871.75 MiB/s [2024-11-20T09:44:40.067Z] 6411.00 IOPS, 801.38 MiB/s 00:26:59.336 Latency(us) 00:26:59.336 [2024-11-20T09:44:40.067Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.336 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:59.336 nvme0n1 : 2.00 6410.59 801.32 0.00 0.00 2491.76 1560.38 7614.66 00:26:59.336 [2024-11-20T09:44:40.067Z] =================================================================================================================== 00:26:59.336 [2024-11-20T09:44:40.067Z] Total : 6410.59 801.32 0.00 0.00 2491.76 1560.38 7614.66 00:26:59.336 { 00:26:59.336 "results": [ 00:26:59.336 { 00:26:59.336 "job": "nvme0n1", 00:26:59.336 "core_mask": "0x2", 00:26:59.336 "workload": "randwrite", 00:26:59.336 "status": "finished", 00:26:59.336 "queue_depth": 16, 00:26:59.336 "io_size": 131072, 00:26:59.336 "runtime": 2.003403, 00:26:59.336 "iops": 6410.592377070415, 00:26:59.336 "mibps": 801.3240471338019, 00:26:59.336 "io_failed": 0, 00:26:59.336 "io_timeout": 0, 00:26:59.336 "avg_latency_us": 2491.7633278087374, 00:26:59.336 "min_latency_us": 1560.3809523809523, 00:26:59.336 "max_latency_us": 7614.659047619048 00:26:59.336 } 00:26:59.336 ], 00:26:59.336 "core_count": 1 00:26:59.336 } 00:26:59.336 10:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:59.336 10:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:59.336 10:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:59.336 10:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:59.336 | select(.opcode=="crc32c") 00:26:59.336 | "\(.module_name) \(.executed)"' 00:26:59.336 10:44:39 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 3370812 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3370812 ']' 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3370812 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3370812 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3370812' 00:26:59.599 killing process with pid 3370812 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3370812 00:26:59.599 Received shutdown signal, test time was about 2.000000 seconds 
00:26:59.599 00:26:59.599 Latency(us) 00:26:59.599 [2024-11-20T09:44:40.330Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:59.599 [2024-11-20T09:44:40.330Z] =================================================================================================================== 00:26:59.599 [2024-11-20T09:44:40.330Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:59.599 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3370812 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 3369151 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # '[' -z 3369151 ']' 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # kill -0 3369151 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # uname 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3369151 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3369151' 00:26:59.858 killing process with pid 3369151 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@973 -- # kill 3369151 00:26:59.858 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@978 -- # wait 3369151 00:27:00.116 00:27:00.116 
real 0m14.096s 00:27:00.116 user 0m26.938s 00:27:00.116 sys 0m4.624s 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:27:00.116 ************************************ 00:27:00.116 END TEST nvmf_digest_clean 00:27:00.116 ************************************ 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:00.116 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:00.116 ************************************ 00:27:00.116 START TEST nvmf_digest_error 00:27:00.116 ************************************ 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1129 -- # run_digest_error 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@328 -- # nvmfpid=3371531 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@329 -- # waitforlisten 3371531 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@327 -- # ip netns exec 
nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3371531 ']' 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:00.117 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.117 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.117 [2024-11-20 10:44:40.725515] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:00.117 [2024-11-20 10:44:40.725556] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:00.117 [2024-11-20 10:44:40.803307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.117 [2024-11-20 10:44:40.843366] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:00.117 [2024-11-20 10:44:40.843403] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:00.117 [2024-11-20 10:44:40.843410] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:00.117 [2024-11-20 10:44:40.843416] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:00.117 [2024-11-20 10:44:40.843421] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:00.117 [2024-11-20 10:44:40.843970] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.375 [2024-11-20 10:44:40.908418] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.375 10:44:40 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.375 10:44:40 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.375 null0 00:27:00.375 [2024-11-20 10:44:40.997521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:00.375 [2024-11-20 10:44:41.021715] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3371550 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3371550 /var/tmp/bperf.sock 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3371550 ']' 
00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:00.375 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:00.375 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.375 [2024-11-20 10:44:41.075660] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:00.375 [2024-11-20 10:44:41.075702] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3371550 ] 00:27:00.633 [2024-11-20 10:44:41.150602] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:00.633 [2024-11-20 10:44:41.191979] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:00.633 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:00.633 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:00.633 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.633 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:00.891 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:01.149 nvme0n1 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:01.149 10:44:41 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:01.149 Running I/O for 2 seconds... 00:27:01.149 [2024-11-20 10:44:41.863939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.149 [2024-11-20 10:44:41.863971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8633 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.149 [2024-11-20 10:44:41.863982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.149 [2024-11-20 10:44:41.874731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.149 [2024-11-20 10:44:41.874757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:2759 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.149 [2024-11-20 10:44:41.874767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.884251] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.884274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:17740 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.884282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.891992] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.892019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:5199 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.892028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.903946] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.903969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:16062 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.903977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.915293] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.915315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:11291 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.915323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.923935] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.923957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:13854 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.923966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.933785] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.933807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:4505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.933815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.942569] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.942590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:15379 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.942599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.952178] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.952200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:613 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.952215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.961438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.961458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:18382 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.961467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.970717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.970739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.970747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.979139] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.979161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.979169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:41.989659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:41.989682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:22279 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:41.989690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.002066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.002087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:12986 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.002096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.011612] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.011633] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:15626 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.011641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.019432] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.019454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:119 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.019463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.030158] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.030180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15509 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.030188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.037897] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.037918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:13810 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.037926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.048685] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.048706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:2007 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.048714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.060247] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.060268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19750 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.060280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.069420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.069441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:18641 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.069449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.081144] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.081166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15670 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.081174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.091076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.091097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:10439 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.408 [2024-11-20 10:44:42.091105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.408 [2024-11-20 10:44:42.099492] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.408 [2024-11-20 10:44:42.099512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9570 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.409 [2024-11-20 10:44:42.099520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.409 [2024-11-20 10:44:42.111609] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.409 [2024-11-20 10:44:42.111630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:19121 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.409 [2024-11-20 10:44:42.111638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.409 [2024-11-20 10:44:42.119485] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.409 [2024-11-20 10:44:42.119507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:13045 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.409 [2024-11-20 
10:44:42.119516] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.409 [2024-11-20 10:44:42.129410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.409 [2024-11-20 10:44:42.129431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6311 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.409 [2024-11-20 10:44:42.129439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.667 [2024-11-20 10:44:42.139547] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.667 [2024-11-20 10:44:42.139568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:5684 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.667 [2024-11-20 10:44:42.139577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.147472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.147493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7254 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.147502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.159347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.159368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:2639 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.159377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.170861] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.170883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:21103 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.170891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.181772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.181794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:8274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.181802] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.190067] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.190089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.190097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.200894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.200915] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:9491 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.200924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.210873] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.210893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:11598 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.210901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.219210] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.219230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:13859 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.219238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.229974] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.229994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:9074 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.230008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.238155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.238176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:9664 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.238184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.249518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.249539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.249547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.260374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.260396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:10688 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.260404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.271591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:01.668 [2024-11-20 10:44:42.271612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:6943 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:01.668 [2024-11-20 10:44:42.271621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:01.668 [2024-11-20 10:44:42.282785] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.282806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:20869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.282814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.292428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.292450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:16361 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.292458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.300387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.300407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12015 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.300417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.309227] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.309248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:8960 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.309257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.319087] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.319113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:5700 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.319121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.328826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.328848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10477 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.328856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.337297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.337319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:23849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.337327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.346711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.346734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:15145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.346742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.358943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.358965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:2187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.358973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.371028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.371050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:24979 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.371059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.380297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.380318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2452 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.380326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.668 [2024-11-20 10:44:42.389780] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.668 [2024-11-20 10:44:42.389802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:9949 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.668 [2024-11-20 10:44:42.389811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.400374] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.400394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:16793 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.400402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.409169] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.409191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:25593 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.409199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.420085] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.420108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:11530 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.420117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.429161] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.429184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:2703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.429192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.441064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.441086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:20830 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.441094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.453388] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.453410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:6762 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.453418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.465645] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.465666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:1551 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.465675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.473985] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.474006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:16869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.474014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.483739] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.483759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:1146 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.483768] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.493173] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.493195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12640 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.493213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.502347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.502368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:7901 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.502376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.927 [2024-11-20 10:44:42.510769] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.927 [2024-11-20 10:44:42.510789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:9647 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.927 [2024-11-20 10:44:42.510797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.520136] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.520157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:3644 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.520165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.530301] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.530322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:14728 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.530329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.538655] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.538675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:3930 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.538683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.549827] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.549848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:3571 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.549856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.560550] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.560569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:21156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.560577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.570153] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.570173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:5565 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.570182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.578539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.578563] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2848 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.578571] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.589948] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.589968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:15332 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.589976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.599886] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.599907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:2145 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.599916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.608746] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.608767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:17674 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.608775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.618076] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.618097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:18292 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.618106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.627545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.627566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:14308 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.627574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.636545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.636566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:17106 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.636574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:01.928 [2024-11-20 10:44:42.646286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:01.928 [2024-11-20 10:44:42.646307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:01.928 [2024-11-20 10:44:42.646315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.186 [2024-11-20 10:44:42.654765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.186 [2024-11-20 10:44:42.654786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:13645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.186 [2024-11-20 10:44:42.654797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.186 [2024-11-20 10:44:42.665006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.186 [2024-11-20 10:44:42.665027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:19337 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.186 [2024-11-20 10:44:42.665035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.186 [2024-11-20 10:44:42.675652] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.186 [2024-11-20 10:44:42.675673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:7175 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.675681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.685016] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.685037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13835 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.685045] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.694361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.694382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:1877 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.694391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.703791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.703812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.703820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.712399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.712419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:15039 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.712428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.723545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.723567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24774 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.723575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.733194] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.733222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11235 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.733230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.742364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.742388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.742396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.751717] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.751738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:9323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.751745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.764515] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.764536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:24744 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.764544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.772434] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.772455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:21245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.772464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.782635] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.782656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:12255 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.782664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.793242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.793263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:13097 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.793271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.804580] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.804602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:12203 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.804611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.813484] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.813506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:91 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.813514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.823794] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.823815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:5012 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.823823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.832551] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.832572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:13258 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.832580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.841735] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.841757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:11261 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.841765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 25582.00 IOPS, 99.93 MiB/s [2024-11-20T09:44:42.918Z] [2024-11-20 10:44:42.854190] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.854220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24933 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.854229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.863720] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.863743] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20128 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.863751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.871764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.871786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:17356 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.871795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.881692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.881713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10743 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.881721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.891439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.891460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:11058 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.891468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.900804] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.900825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:16307 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.900833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.187 [2024-11-20 10:44:42.910229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.187 [2024-11-20 10:44:42.910251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:9054 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.187 [2024-11-20 10:44:42.910263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.919953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.919974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:15909 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.919983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.928695] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.928718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:24715 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.928725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.939122] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.939144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:18611 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.939153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.946997] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.947019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:7009 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.947027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.958468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.958490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1860 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.958498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.969518] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.969540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:3046 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.969549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.979318] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.979340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:10418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.979348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.987407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.987429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:22373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.446 [2024-11-20 10:44:42.987438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.446 [2024-11-20 10:44:42.997208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.446 [2024-11-20 10:44:42.997229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:15421 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:42.997237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.006364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.006386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:10602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.006394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.016276] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.016297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12789 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.016305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.025483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.025505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:20393 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.025513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.035162] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.035184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:6703 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.035192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.043579] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.043603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6384 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.043611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.055196] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.055225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:16367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.055250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.066950] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70)
00:27:02.447 [2024-11-20 10:44:43.066971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:6938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:02.447 [2024-11-20 10:44:43.066980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:27:02.447 [2024-11-20 10:44:43.075457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data
digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.075479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:13921 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.075491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.086565] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.086587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18430 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.086595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.095893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.095914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:323 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.095922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.106439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.106461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:24923 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.106469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.115159] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.115182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18514 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.115190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.125379] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.125400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:7116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.125409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.133599] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.133621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:25028 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.133629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.145361] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.145383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:6330 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.145391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.154325] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.154347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:22445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.154355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.447 [2024-11-20 10:44:43.163941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.447 [2024-11-20 10:44:43.163968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:12099 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.447 [2024-11-20 10:44:43.163976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.175802] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.175825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:2166 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.175833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.185297] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.185320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:5173 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.185328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:87 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.194693] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.194715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:23117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.194724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.203607] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.203629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:20729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.203637] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.213653] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.213676] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:12685 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.213685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.223960] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.223982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:4479 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 
10:44:43.223990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.234799] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.234821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:14908 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.234830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.244113] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.244136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:9525 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.244144] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.252866] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.252887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12397 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.252895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.261840] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.261861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23284 len:1 
SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.261869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.270394] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.270415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:8120 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.270424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.279889] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.279911] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.279919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.291074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.291095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:21928 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.291103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.299959] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.299980] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:5543 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.299988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.311941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.311962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3883 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.311971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.324249] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.324270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:22225 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.324279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.335952] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.335974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20738 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.335985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.344636] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 
00:27:02.706 [2024-11-20 10:44:43.344657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:9044 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.344665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.356728] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.356749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:22321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.356757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.364665] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.364686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:24367 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.364694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.376589] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.376611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:2927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.376619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.386005] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.386029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:11373 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.386037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.396474] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.396496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:14998 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.706 [2024-11-20 10:44:43.396505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.706 [2024-11-20 10:44:43.403887] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.706 [2024-11-20 10:44:43.403908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:20498 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.707 [2024-11-20 10:44:43.403916] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.707 [2024-11-20 10:44:43.414300] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.707 [2024-11-20 10:44:43.414322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:12562 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.707 [2024-11-20 10:44:43.414330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 
sqhd:0001 p:0 m:0 dnr:0 00:27:02.707 [2024-11-20 10:44:43.423796] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.707 [2024-11-20 10:44:43.423821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13839 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.707 [2024-11-20 10:44:43.423830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.707 [2024-11-20 10:44:43.432362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.707 [2024-11-20 10:44:43.432384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:21076 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.707 [2024-11-20 10:44:43.432392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:50 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.444755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.444779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:6193 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.444787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.457382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.457406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:18239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.457415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.465632] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.465654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:18445 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.465662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.476242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.476263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.476271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.485807] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.485828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15347 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.485836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.495990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.496011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:632 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.496019] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.506053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.506074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19171 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.506082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.514852] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.514873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:13756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.514881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.525712] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.525733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13170 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.525741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.536595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.536617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:15475 len:1 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.536625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.547397] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.547418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.547426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.560651] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.560672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:5198 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.560681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.568941] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.568962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:7283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.568971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.581322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.581343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:116 nsid:1 lba:17938 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.581352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.589832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.965 [2024-11-20 10:44:43.589853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:12427 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.965 [2024-11-20 10:44:43.589861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.965 [2024-11-20 10:44:43.601656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.601680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18695 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.601688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.614044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.614064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:16494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.614072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:22 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.626567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.626588] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:23912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.626595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.638674] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.638695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:15819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.638703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.650506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.650527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:20948 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.650535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.659557] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.659578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:12940 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.659586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.670623] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error 
on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.670644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3002 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.670653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.679659] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.679680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:5315 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.679688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:02.966 [2024-11-20 10:44:43.688783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:02.966 [2024-11-20 10:44:43.688804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:7887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:02.966 [2024-11-20 10:44:43.688812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.698259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.698281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:8610 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.698290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.708341] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.708363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:18298 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.708372] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.717882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.717904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:18274 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.717913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.727326] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.727347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:15429 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.727355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.736507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.736528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:9784 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.736537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.745497] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.745519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:3661 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.745527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.754163] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.754185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:8263 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.754193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.763765] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.763786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:11441 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.763794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.771740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.771762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:14472 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.771774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.781452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.781473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21252 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.781482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.790524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.790544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.790552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.800009] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.800031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:9918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.800039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.810138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.810159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:6775 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.810168] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.224 [2024-11-20 10:44:43.818906] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.224 [2024-11-20 10:44:43.818929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:21413 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.224 [2024-11-20 10:44:43.818937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.225 [2024-11-20 10:44:43.829846] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.225 [2024-11-20 10:44:43.829866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:10963 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.225 [2024-11-20 10:44:43.829874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.225 [2024-11-20 10:44:43.838116] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.225 [2024-11-20 10:44:43.838137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:17224 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.225 [2024-11-20 10:44:43.838145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.225 25612.50 IOPS, 100.05 MiB/s [2024-11-20T09:44:43.956Z] [2024-11-20 10:44:43.849791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x2441d70) 00:27:03.225 [2024-11-20 10:44:43.849812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:115 nsid:1 lba:9630 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:03.225 [2024-11-20 10:44:43.849820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:03.225 00:27:03.225 Latency(us) 00:27:03.225 [2024-11-20T09:44:43.956Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.225 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:27:03.225 nvme0n1 : 2.05 25109.54 98.08 0.00 0.00 4990.60 2590.23 47435.58 00:27:03.225 [2024-11-20T09:44:43.956Z] =================================================================================================================== 00:27:03.225 [2024-11-20T09:44:43.956Z] Total : 25109.54 98.08 0.00 0.00 4990.60 2590.23 47435.58 00:27:03.225 { 00:27:03.225 "results": [ 00:27:03.225 { 00:27:03.225 "job": "nvme0n1", 00:27:03.225 "core_mask": "0x2", 00:27:03.225 "workload": "randread", 00:27:03.225 "status": "finished", 00:27:03.225 "queue_depth": 128, 00:27:03.225 "io_size": 4096, 00:27:03.225 "runtime": 2.045159, 00:27:03.225 "iops": 25109.539160524928, 00:27:03.225 "mibps": 98.0841373458005, 00:27:03.225 "io_failed": 0, 00:27:03.225 "io_timeout": 0, 00:27:03.225 "avg_latency_us": 4990.59967504101, 00:27:03.225 "min_latency_us": 2590.232380952381, 00:27:03.225 "max_latency_us": 47435.58095238095 00:27:03.225 } 00:27:03.225 ], 00:27:03.225 "core_count": 1 00:27:03.225 } 00:27:03.225 10:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:03.225 10:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:03.225 10:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:03.225 | .driver_specific 00:27:03.225 | .nvme_error 00:27:03.225 | .status_code 00:27:03.225 | .command_transient_transport_error' 
00:27:03.225 10:44:43 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 201 > 0 )) 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3371550 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3371550 ']' 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3371550 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371550 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371550' 00:27:03.482 killing process with pid 3371550 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3371550 00:27:03.482 Received shutdown signal, test time was about 2.000000 seconds 00:27:03.482 00:27:03.482 Latency(us) 00:27:03.482 [2024-11-20T09:44:44.213Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:03.482 [2024-11-20T09:44:44.213Z] 
=================================================================================================================== 00:27:03.482 [2024-11-20T09:44:44.213Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:03.482 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3371550 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3372042 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3372042 /var/tmp/bperf.sock 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3372042 ']' 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 
00:27:03.740 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:03.740 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:03.740 [2024-11-20 10:44:44.366772] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:03.740 [2024-11-20 10:44:44.366822] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372042 ] 00:27:03.740 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:03.740 Zero copy mechanism will not be used. 00:27:03.740 [2024-11-20 10:44:44.443836] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:03.998 [2024-11-20 10:44:44.487231] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.998 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:03.998 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:03.998 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:03.998 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.256 10:44:44 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:04.515 nvme0n1 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:04.515 10:44:45 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:04.515 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:04.515 Zero copy mechanism will not be used. 00:27:04.515 Running I/O for 2 seconds... 
00:27:04.515 [2024-11-20 10:44:45.164825] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.164860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.164871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.170225] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.170252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.170260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.173089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.173112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.173121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.178702] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.178724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.178733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.184088] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.184109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.184118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.189401] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.189434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:14208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.189443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.194760] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.194783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.194791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.200159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.200180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.200192] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.205669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.205691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.205700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.211118] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.211139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.211148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.216472] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.216496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.216504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.222001] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.222024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:04.515 [2024-11-20 10:44:45.222032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.227041] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.227063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.227072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.232627] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.232649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.232656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.515 [2024-11-20 10:44:45.237731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.515 [2024-11-20 10:44:45.237753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.515 [2024-11-20 10:44:45.237762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.243008] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.243030] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 
lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.243037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.248261] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.248286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.248294] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.253463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.253485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.253493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.258660] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.258682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.258690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.263893] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.263913] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:21632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.263922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.269077] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.269099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.269107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.274260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.274282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.274290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.279417] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.279439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.279447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.284611] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.284632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:3040 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.284640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.289814] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.289835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.289843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.294942] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.294963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.294971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.300064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.300084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.300093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.305232] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.305252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:13024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.305260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.310410] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.310431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:4160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.310439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.315524] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.315546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:20064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.315555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.320696] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.320717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.320726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 
p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.325877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.325899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.325907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.331058] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.331079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.331087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.336257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.336277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.336289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.341478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.341500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.341508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.346690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.346711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:23808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.346719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.351858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.351879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.351887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.357074] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.357095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:13536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.357103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.775 [2024-11-20 10:44:45.362242] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.775 [2024-11-20 10:44:45.362263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.775 [2024-11-20 10:44:45.362272] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.367414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.367435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.367443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.372588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.372610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.372618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.377740] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.377762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.377770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.382905] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.382934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.382942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.388091] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.388113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.388121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.393284] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.393306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.393314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.398478] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.398500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:11680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.398508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.403638] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.403659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.403668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.408832] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.408853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.408861] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.413991] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.414011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:13504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.414019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.419171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.419193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.419206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.424375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.424397] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.424405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.429613] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.429635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.429643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.434791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.434812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.434820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.439988] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.440010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.440018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.445217] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.445238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.445246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.450404] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.450425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.450433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.455671] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.455692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.455700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.460817] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.460838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:17440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.460846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.465951] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.465972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:4224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.465980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.471073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.471095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:7168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.471106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.476230] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.476252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.476260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.481347] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.481368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.481376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0022 
p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.486454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.486475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.486483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.491656] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.491679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.491688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:04.776 [2024-11-20 10:44:45.496851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:04.776 [2024-11-20 10:44:45.496873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:04.776 [2024-11-20 10:44:45.496881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.035 [2024-11-20 10:44:45.502101] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.035 [2024-11-20 10:44:45.502124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.035 [2024-11-20 10:44:45.502134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.035 [2024-11-20 10:44:45.507399] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.035 [2024-11-20 10:44:45.507421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.035 [2024-11-20 10:44:45.507429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.035 [2024-11-20 10:44:45.512563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.035 [2024-11-20 10:44:45.512585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.035 [2024-11-20 10:44:45.512594] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.035 [2024-11-20 10:44:45.517764] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.035 [2024-11-20 10:44:45.517785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.517794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.522907] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.522929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.522936] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.528064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.528085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:7008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.528093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.533168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.533189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.533197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.538340] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.538362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:14176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.538370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.543506] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.543527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.543536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.548640] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.548661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.548669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.553767] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.553787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.553795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.558870] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.558891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:19360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.558903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.564046] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.564068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:4 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.564076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.569259] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.569281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.569289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.574467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.574488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:20992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.574497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.579669] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.579692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.579700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.584854] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.584876] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:17952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.584884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.590044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.590065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.590073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.595209] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.595230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.595239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.600368] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.036 [2024-11-20 10:44:45.600390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.036 [2024-11-20 10:44:45.600399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.036 [2024-11-20 10:44:45.605570] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.605595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.605604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.610742] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.610764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.610772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.615990] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.616012] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.616020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.621226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.621263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.621271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.626418] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.626439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.626447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.631540] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.631561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.631569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.636715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.636737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:7072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.636745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.641916] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.641937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.641945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.647119] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.036 [2024-11-20 10:44:45.647140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.036 [2024-11-20 10:44:45.647148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.036 [2024-11-20 10:44:45.652398] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.652420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.652428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.657868] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.657890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.657898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.663348] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.663369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.663377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.668732] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.668754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.668762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.674066] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.674088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:13856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.674096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.679481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.679504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.679515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.684982] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.685006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.685016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.690343] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.690365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.690373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.695924] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.695946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:3904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.695958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.701449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.701483] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:22528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.701491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.707060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.707084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:18784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.707092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.712545] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.712567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.712576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.715582] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.715607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.715618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.720560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.720583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:20800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.720591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.725698] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.725721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.725729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.731701] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.731724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.731733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.737004] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.737027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.737035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.742298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.742323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.742331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.747237] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.747260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.747268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.752996] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.753019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.753027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.037 [2024-11-20 10:44:45.758544] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.037 [2024-11-20 10:44:45.758568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.037 [2024-11-20 10:44:45.758576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.764084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.764106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.764115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.771375] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.771398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.771406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.778387] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.778411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.778420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.786073] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.786096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.786104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.793943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.793967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.793979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.802420] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.802444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.802452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.810823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.810846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.810854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.819414] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.819436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:11904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.819444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.827772] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.827796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:14752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.827805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.836109] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.836133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:21248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.836141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.844463] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.844486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.844495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.853047] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.853070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.853079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.861236] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.861259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.861267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.869571] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.869599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.869608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.877429] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.877452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.877461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.885172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.885195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.885212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.296 [2024-11-20 10:44:45.893115] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.296 [2024-11-20 10:44:45.893138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.296 [2024-11-20 10:44:45.893147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.900896] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.900918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.900927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.909064] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.909088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.909096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.916600] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.916623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.916632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.923336] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.923360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.923368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.930286] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.930310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.930319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.937362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.937386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.937395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.943813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.943837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:19136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.943845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.949220] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.949259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.949267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.954711] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.954734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.954742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.960267] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.960289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:17824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.960298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.965750] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.965772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.965780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.971561] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.971584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.971592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.977156] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.977178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:21216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.977186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.982507] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.982530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:18112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.982542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.987917] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.987939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:3392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.987947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.993238] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.993260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.993268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:45.999032] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:45.999054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:45.999062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:46.005863] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:46.005886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:6944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:46.005895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:46.013634] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:46.013657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:24672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:46.013665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.297 [2024-11-20 10:44:46.021187] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.297 [2024-11-20 10:44:46.021219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.297 [2024-11-20 10:44:46.021228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.028708] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.028731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.028740] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.034534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.034557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.034565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.040040] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.040067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:14528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.040076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.045483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.045505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.045513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.051366] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.051389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.051398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.056939] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.056962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.056971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.062467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.062489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.062498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0042 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.067038] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.067064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.067072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.072235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.072258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.072267] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0002 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.077457] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.077479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:27:05.557 [2024-11-20 10:44:46.077487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:7 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:05.557 [2024-11-20 10:44:46.082744] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30)
00:27:05.557 [2024-11-20 10:44:46.082767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:10400 len:32 SGL
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.082775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.088002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.088025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.088033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.093380] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.093402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.093410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.098823] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.098844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.098853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.104176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.104199] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.104214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.109402] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.109424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.109432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.112191] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.112219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.112227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.117510] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.117532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.117540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.122800] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.122822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.122830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.128171] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.128192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.128212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.133283] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.133304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.133312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.138454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.138475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.557 [2024-11-20 10:44:46.138483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.557 [2024-11-20 10:44:46.143737] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.557 [2024-11-20 10:44:46.143759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.143767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.149131] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.149153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.149162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.154357] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.154378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.154386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 5457.00 IOPS, 682.12 MiB/s [2024-11-20T09:44:46.289Z] [2024-11-20 10:44:46.161084] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.161106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.161114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.166425] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.166447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.166455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.171688] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.171709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.171717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.176972] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.176993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.177001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.182295] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.182315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.182323] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.187560] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.187582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.187590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.193110] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.193132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.193140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.198596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.198617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.198626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.204089] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.204111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2528 len:32 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.204119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.209937] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.209957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.209965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.215044] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.215065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.215073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.220298] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.220320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.220333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.225588] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.225610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ 
sqid:1 cid:14 nsid:1 lba:22016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.225618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.230741] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.230762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.230770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.235953] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.235975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.235983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.241135] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.241156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.241164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.246539] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.246560] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.246568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.252095] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.252117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.252125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.257783] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.257805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.257813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.263155] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.263175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.263185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.268411] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.268437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.268445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.273692] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.273715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.273723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.558 [2024-11-20 10:44:46.279061] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.558 [2024-11-20 10:44:46.279084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.558 [2024-11-20 10:44:46.279092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.817 [2024-11-20 10:44:46.284439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.817 [2024-11-20 10:44:46.284461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.817 [2024-11-20 10:44:46.284469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.817 [2024-11-20 10:44:46.289979] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.817 [2024-11-20 10:44:46.290010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.817 [2024-11-20 10:44:46.290019] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.817 [2024-11-20 10:44:46.295467] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.817 [2024-11-20 10:44:46.295488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.817 [2024-11-20 10:44:46.295496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.817 [2024-11-20 10:44:46.300913] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.300934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.300942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.306428] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.306449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.306457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.311877] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.311898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.311906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.317078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.317102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.317110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.322409] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.322431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.322439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.327751] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.327773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.327782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.333050] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.333072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.333080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.338328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.338350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.338358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.343567] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.343588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.343595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.349028] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.349050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 
10:44:46.349059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.354328] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.354350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.354358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.359820] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.359845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.359853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.365258] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.365280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.365288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.370826] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.370849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19520 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.370857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.376563] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.376586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.376595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.381927] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.381947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.381956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.387193] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.387219] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.387228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.392454] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.392475] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:14880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.392483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.397815] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.397837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.397846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.402999] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.403020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.403028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.408438] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.408459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.408467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.413859] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 
00:27:05.818 [2024-11-20 10:44:46.413880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.413888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.419228] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.419248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.419256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.424743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.424764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.424772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.818 [2024-11-20 10:44:46.430130] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.818 [2024-11-20 10:44:46.430151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.818 [2024-11-20 10:44:46.430160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.435416] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.435438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:8224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.435446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.441229] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.441250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.441259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.447943] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.447965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.447974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.455235] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.455257] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.455268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 
sqhd:0062 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.462364] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.462386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.462394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.470024] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.470046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.470054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.477483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.477505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.477513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.484795] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.484819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.484827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.492761] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.492783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:4608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.492792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.498778] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.498800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.498808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.504891] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.504913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.504922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.510452] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.510473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:6336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.510481] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.515808] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.515833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.515841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.521078] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.521100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.521108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.526483] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.526505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.526513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.531525] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.531548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.531557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.536987] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.537009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:23360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.537017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:05.819 [2024-11-20 10:44:46.542362] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:05.819 [2024-11-20 10:44:46.542384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:05.819 [2024-11-20 10:44:46.542393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.547644] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.547666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.547674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.552858] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.552879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:8 nsid:1 lba:22368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.552887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.557962] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.557983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:19584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.557991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.563208] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.563230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.563238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.568789] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.568810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.568818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.574593] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.574615] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.574624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.579963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.579985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:1888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.579993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.585439] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.585461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.585469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.590947] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.590968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.590976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.596367] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.596389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.596396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.601963] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.601984] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.601992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.607314] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.607336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.607347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.613060] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.613083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.613091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.620361] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.620383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:1600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.620392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.628176] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.628199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:13728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.628213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.635151] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.635174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.635183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.642057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.642079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.642087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.650002] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.650026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.650035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.657755] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.657779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.657789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.665514] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.665539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.665547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.674006] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.674031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:1568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.079 [2024-11-20 10:44:46.674042] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.079 [2024-11-20 10:44:46.682260] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.079 [2024-11-20 10:44:46.682283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.682292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.690039] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.690063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.690072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.696853] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.696878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.696888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.704172] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.704194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 
10:44:46.704208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.711168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.711192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.711200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.718595] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.718619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.718628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.726534] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.726557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.726566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.734322] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.734345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15360 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.734357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.741157] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.741180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.741189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.748075] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.748099] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.748107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.754435] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.754458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.754466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.759824] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.759845] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:8 nsid:1 lba:12768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.759853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.765159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.765180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.765189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.770481] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.770503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.770511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.775666] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.775688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.775697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.780862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 
10:44:46.780883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:10976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.780891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.786142] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.786167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.786175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.791412] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.791434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.791442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.796629] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.796651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.796659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.080 [2024-11-20 10:44:46.801874] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data 
digest error on tqpair=(0x14c1a30) 00:27:06.080 [2024-11-20 10:44:46.801895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:24736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.080 [2024-11-20 10:44:46.801903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.807226] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.807247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.807256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.812596] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.812618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.812626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.817945] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.817967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.817975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.823248] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.823269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.823278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.828407] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.828429] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.828437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.833601] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.833622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.833631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.838810] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.838832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.838840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.844052] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.844074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.844082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.849274] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.849295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.338 [2024-11-20 10:44:46.849303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.338 [2024-11-20 10:44:46.854511] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.338 [2024-11-20 10:44:46.854533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.854541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.859697] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.859717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.859725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.864882] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.864902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:2432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.864910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.870037] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.870059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.870067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.875234] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.875255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.875269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.880447] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.880468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.880476] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.885690] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.885711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.885719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.891224] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.891246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.891254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.896501] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.896523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:13088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.896532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.901813] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.901835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24832 len:32 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.901842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.907053] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.907074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.907082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.912344] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.912365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:22624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.912373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.917558] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.917580] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:9472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.917588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.922724] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.922750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:2 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.922758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.927898] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.927920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.927929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.933121] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.933143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.933151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.938320] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.938341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:3360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.938349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.943468] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.943489] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:17792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.943497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.948731] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.948753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.948762] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.953971] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.953993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.954001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.959168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.959189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.959197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.964408] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.964428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.964437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.969617] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.969639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.969646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.974862] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.974884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.974892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.980057] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.980079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.980087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.985200] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.985226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:7904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.985234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.990346] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.990367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.990375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:46.995449] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:46.995470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:46.995478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.000606] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.000627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:15552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.000635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 
p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.005743] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.005765] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:22400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.005773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.010894] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.010917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.010928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.016035] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.016057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.016065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.021138] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.021160] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.021168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.026315] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.026336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:12384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.026344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.031427] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.031448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.031456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.036559] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.036581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.036589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.041715] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.041737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.041746] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.046857] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.046878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.046885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.052021] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.052042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.052050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.057221] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.057243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.339 [2024-11-20 10:44:47.057251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.339 [2024-11-20 10:44:47.062460] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.339 [2024-11-20 10:44:47.062481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:06.339 [2024-11-20 10:44:47.062489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.067705] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.067726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.067735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.072970] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.072990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.072999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.078168] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.078190] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.078200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.083382] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.083404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 
nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.083411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.088591] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.088613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.088621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.093851] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.093873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.093881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.099069] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.099091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:5952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.099103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.104257] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.104278] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.104286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.109405] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.109427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:3680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.109435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.114564] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.114586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.114595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.119784] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.119806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.119814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.124979] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 
00:27:06.598 [2024-11-20 10:44:47.125003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.125011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.130054] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.130075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:4864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.130083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.135360] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.135382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:20288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.135390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.140159] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.140182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.140190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.145235] 
nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.145263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:14624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.145272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.150174] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.150196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.150213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:06.598 [2024-11-20 10:44:47.155222] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.155243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.155251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:06.598 5501.00 IOPS, 687.62 MiB/s [2024-11-20T09:44:47.329Z] [2024-11-20 10:44:47.161791] nvme_tcp.c:1365:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x14c1a30) 00:27:06.598 [2024-11-20 10:44:47.161815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:06.598 [2024-11-20 10:44:47.161824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:06.598 00:27:06.598 Latency(us) 00:27:06.598 [2024-11-20T09:44:47.329Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.598 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:27:06.598 nvme0n1 : 2.00 5500.63 687.58 0.00 0.00 2905.67 620.25 13481.69 00:27:06.598 [2024-11-20T09:44:47.329Z] =================================================================================================================== 00:27:06.598 [2024-11-20T09:44:47.329Z] Total : 5500.63 687.58 0.00 0.00 2905.67 620.25 13481.69 00:27:06.598 { 00:27:06.598 "results": [ 00:27:06.598 { 00:27:06.598 "job": "nvme0n1", 00:27:06.598 "core_mask": "0x2", 00:27:06.598 "workload": "randread", 00:27:06.598 "status": "finished", 00:27:06.598 "queue_depth": 16, 00:27:06.598 "io_size": 131072, 00:27:06.598 "runtime": 2.003043, 00:27:06.598 "iops": 5500.6307902526305, 00:27:06.598 "mibps": 687.5788487815788, 00:27:06.598 "io_failed": 0, 00:27:06.598 "io_timeout": 0, 00:27:06.598 "avg_latency_us": 2905.6660529523115, 00:27:06.598 "min_latency_us": 620.2514285714286, 00:27:06.598 "max_latency_us": 13481.691428571428 00:27:06.598 } 00:27:06.598 ], 00:27:06.598 "core_count": 1 00:27:06.598 } 00:27:06.598 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:06.599 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:06.599 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:06.599 | .driver_specific 00:27:06.599 | .nvme_error 00:27:06.599 | .status_code 00:27:06.599 | .command_transient_transport_error' 00:27:06.599 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:06.874 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 356 > 0 )) 00:27:06.874 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3372042 00:27:06.874 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3372042 ']' 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3372042 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3372042 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3372042' 00:27:06.875 killing process with pid 3372042 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3372042 00:27:06.875 Received shutdown signal, test time was about 2.000000 seconds 00:27:06.875 00:27:06.875 Latency(us) 00:27:06.875 [2024-11-20T09:44:47.606Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:06.875 [2024-11-20T09:44:47.606Z] =================================================================================================================== 00:27:06.875 [2024-11-20T09:44:47.606Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:06.875 10:44:47 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3372042 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3372713 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3372713 /var/tmp/bperf.sock 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3372713 ']' 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:06.875 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:06.875 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
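The `get_transient_errcount` helper above pipes `bdev_get_iostat -b nvme0n1` through the jq filter `.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error` and the test asserts the count is positive (`(( 356 > 0 ))`). A minimal sketch of the same extraction in Python; the response below is hypothetical and abridged to just the fields the jq filter walks, with the counter value 356 taken from the check in the log above:

```python
import json

# Hypothetical, abridged bdev_get_iostat response: only the fields the
# jq filter in the log touches are shown; 356 is the value asserted by
# the `(( 356 > 0 ))` check above.
response = json.loads("""
{
  "bdevs": [
    {
      "name": "nvme0n1",
      "driver_specific": {
        "nvme_error": {
          "status_code": {
            "command_transient_transport_error": 356
          }
        }
      }
    }
  ]
}
""")

# Python equivalent of:
#   jq -r '.bdevs[0] | .driver_specific | .nvme_error
#          | .status_code | .command_transient_transport_error'
errcount = (response["bdevs"][0]["driver_specific"]
            ["nvme_error"]["status_code"]
            ["command_transient_transport_error"])

# The digest-error test passes only if transient transport errors were seen.
assert errcount > 0
print(errcount)  # -> 356
```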
00:27:06.876 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:06.876 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.138 [2024-11-20 10:44:47.639064] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:07.138 [2024-11-20 10:44:47.639108] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3372713 ] 00:27:07.138 [2024-11-20 10:44:47.713420] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:07.138 [2024-11-20 10:44:47.755349] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:07.138 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:07.138 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:07.138 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.138 10:44:47 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.394 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:07.960 nvme0n1 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:07.960 10:44:48 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:07.960 Running I/O for 2 seconds... 
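Every completion below fails with a data digest error because the `accel_error_inject_error -o crc32c -t corrupt -i 256` call above corrupts the CRC32C results the accel layer computes for received PDUs, while the controller was attached with `--ddgst` (data digest enabled). NVMe/TCP data digests are CRC-32C (Castagnoli, reflected polynomial 0x82F63B78). A minimal bitwise sketch for illustration only (SPDK itself uses table-driven or hardware-offloaded implementations):

```python
def crc32c(data: bytes) -> int:
    """Bitwise CRC-32C (Castagnoli), as used for NVMe/TCP data digests."""
    crc = 0xFFFFFFFF
    for byte in data:
        crc ^= byte
        for _ in range(8):
            # Reflected polynomial 0x82F63B78, one bit at a time.
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc ^ 0xFFFFFFFF

# Standard CRC-32C check value for the ASCII string "123456789".
assert crc32c(b"123456789") == 0xE3069283

# Any mismatch between the payload and its digest (which is what the
# corrupt injection simulates on the receive path) fails verification.
payload = bytes(4096)
good = crc32c(payload)
bad = crc32c(payload[:-1] + b"\x01")
assert good != bad
print(hex(good))
```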
00:27:07.960 [2024-11-20 10:44:48.577729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1710 00:27:07.960 [2024-11-20 10:44:48.578846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:23788 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.578876] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.586629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e9e10 00:27:07.960 [2024-11-20 10:44:48.587508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.587531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.596803] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0 00:27:07.960 [2024-11-20 10:44:48.598192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:13715 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.598216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.606146] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fa3a0 00:27:07.960 [2024-11-20 10:44:48.607595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:20859 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.607616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.613015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7970 00:27:07.960 [2024-11-20 10:44:48.613840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:19236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.613860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.622925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e9e10 00:27:07.960 [2024-11-20 10:44:48.623546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:20536 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.623566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.631993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ecc78 00:27:07.960 [2024-11-20 10:44:48.632843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:16853 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.632863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.641113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e84c0 00:27:07.960 [2024-11-20 10:44:48.641963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:23856 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.641983] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:07.960 [2024-11-20 10:44:48.650215] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed4e8 00:27:07.960 [2024-11-20 10:44:48.651069] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:20488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.960 [2024-11-20 10:44:48.651090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:07.961 [2024-11-20 10:44:48.659419] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e7c50 00:27:07.961 [2024-11-20 10:44:48.660332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:24295 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.961 [2024-11-20 10:44:48.660352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:07.961 [2024-11-20 10:44:48.668942] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e3498 00:27:07.961 [2024-11-20 10:44:48.669662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:10738 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.961 [2024-11-20 10:44:48.669681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:07.961 [2024-11-20 10:44:48.677686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fbcf0 00:27:07.961 [2024-11-20 10:44:48.678307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:15412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.961 [2024-11-20 10:44:48.678327] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:07.961 [2024-11-20 10:44:48.686478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5be8 00:27:07.961 [2024-11-20 10:44:48.687387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:24656 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:07.961 [2024-11-20 10:44:48.687407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.696028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f8a50 00:27:08.219 [2024-11-20 10:44:48.696864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6246 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.219 [2024-11-20 10:44:48.696884] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.706927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f8a50 00:27:08.219 [2024-11-20 10:44:48.708223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:6678 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.219 [2024-11-20 10:44:48.708243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.713408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eaab8 00:27:08.219 [2024-11-20 10:44:48.714092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:16000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:08.219 [2024-11-20 10:44:48.714111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.722607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0630 00:27:08.219 [2024-11-20 10:44:48.723281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.219 [2024-11-20 10:44:48.723301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.732726] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.219 [2024-11-20 10:44:48.733438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:25442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.219 [2024-11-20 10:44:48.733458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.219 [2024-11-20 10:44:48.741636] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.742348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:19934 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.742368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.750629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.751435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:765 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.751455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.759616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.760410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:8716 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.760429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.768634] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.769439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15487 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.769461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.777652] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.778396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1625 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.778416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.786620] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.787422] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:13702 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.787441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.795650] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.796460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:1820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.796479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.804628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.805439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:4174 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.805460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.813823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378 00:27:08.220 [2024-11-20 10:44:48.814642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:1028 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.814662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.822492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7da8 00:27:08.220 [2024-11-20 10:44:48.823335] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:21732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.823355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.834397] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fcdd0 00:27:08.220 [2024-11-20 10:44:48.835766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9924 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.835787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.844777] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e73e0 00:27:08.220 [2024-11-20 10:44:48.846065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:5457 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.846086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.853955] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f46d0 00:27:08.220 [2024-11-20 10:44:48.854932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19659 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.854954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.864448] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f46d0 
00:27:08.220 [2024-11-20 10:44:48.865884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:21711 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.865904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.872454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc128 00:27:08.220 [2024-11-20 10:44:48.873396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:10219 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.873416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.882738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc128 00:27:08.220 [2024-11-20 10:44:48.884234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:19959 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.884253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.889200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e01f8 00:27:08.220 [2024-11-20 10:44:48.890000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:6670 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.890020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.898646] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15c7640) with pdu=0x2000166e6b70 00:27:08.220 [2024-11-20 10:44:48.899577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:7154 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.899596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.907627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0ea0 00:27:08.220 [2024-11-20 10:44:48.908233] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:11236 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.908253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.916557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eff18 00:27:08.220 [2024-11-20 10:44:48.917145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:3749 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.917165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.926557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed4e8 00:27:08.220 [2024-11-20 10:44:48.927691] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:15000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.220 [2024-11-20 10:44:48.927711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:27:08.220 [2024-11-20 10:44:48.935964] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ef6a8
00:27:08.220 [2024-11-20 10:44:48.937123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:7654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.220 [2024-11-20 10:44:48.937143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:004b p:0 m:0 dnr:0
00:27:08.220 [2024-11-20 10:44:48.943839] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6458
00:27:08.220 [2024-11-20 10:44:48.944582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:17527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.220 [2024-11-20 10:44:48.944602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:004a p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.953554] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f2510
00:27:08.479 [2024-11-20 10:44:48.954505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:7083 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.954524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:002b p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.962717] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4298
00:27:08.479 [2024-11-20 10:44:48.963642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:2216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.963661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.971994] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e5ec8
00:27:08.479 [2024-11-20 10:44:48.972944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:2357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.972962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.983077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e01f8
00:27:08.479 [2024-11-20 10:44:48.984485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:16857 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.984506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.992031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f5378
00:27:08.479 [2024-11-20 10:44:48.993539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:18249 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.993558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:48.998653] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7970
00:27:08.479 [2024-11-20 10:44:48.999466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:8172 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:48.999485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:000e p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.009791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ea680
00:27:08.479 [2024-11-20 10:44:49.011066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:10583 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.011092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:58 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.019218] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e7818
00:27:08.479 [2024-11-20 10:44:49.020602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:15534 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.020621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.028343] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f9b30
00:27:08.479 [2024-11-20 10:44:49.029729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:6066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.029748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.034466] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e2c28
00:27:08.479 [2024-11-20 10:44:49.035153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:23425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.035172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.043890] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e6fa8
00:27:08.479 [2024-11-20 10:44:49.044724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:5305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.044743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0010 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.053591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1f80
00:27:08.479 [2024-11-20 10:44:49.054420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:12301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.054440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.062083] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e99d8
00:27:08.479 [2024-11-20 10:44:49.062788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:18697 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.062808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0030 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.073693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ef270
00:27:08.479 [2024-11-20 10:44:49.075195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:20467 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.075216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.080165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f1ca0
00:27:08.479 [2024-11-20 10:44:49.080982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.081001] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.089562] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f57b0
00:27:08.479 [2024-11-20 10:44:49.090518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13179 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.090540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.100832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6458
00:27:08.479 [2024-11-20 10:44:49.102260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:17244 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.102279] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:31 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.107567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6cc8
00:27:08.479 [2024-11-20 10:44:49.108324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:23874 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.108343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:95 cdw0:0 sqhd:0004 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.118881] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ff3c8
00:27:08.479 [2024-11-20 10:44:49.119997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24969 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.120016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0046 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.126683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ef270
00:27:08.479 [2024-11-20 10:44:49.127346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:2285 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.127365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0045 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.136040] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fa3a0
00:27:08.479 [2024-11-20 10:44:49.137055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:176 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.137075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0026 p:0 m:0 dnr:0
00:27:08.479 [2024-11-20 10:44:49.147214] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fb8b8
00:27:08.479 [2024-11-20 10:44:49.148685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:22745 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.479 [2024-11-20 10:44:49.148705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0068 p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.153525] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ec840
00:27:08.480 [2024-11-20 10:44:49.154163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:1146 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.154191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0027 p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.162045] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e84c0
00:27:08.480 [2024-11-20 10:44:49.162697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:14025 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.162717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0078 p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.173243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e6fa8
00:27:08.480 [2024-11-20 10:44:49.174375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:14067 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.174395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003a p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.182312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed4e8
00:27:08.480 [2024-11-20 10:44:49.182993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:8087 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.183013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:006b p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.190947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc128
00:27:08.480 [2024-11-20 10:44:49.192039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:20912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.192059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:006a p:0 m:0 dnr:0
00:27:08.480 [2024-11-20 10:44:49.200084] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ef6a8
00:27:08.480 [2024-11-20 10:44:49.201102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:13797 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.480 [2024-11-20 10:44:49.201121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.209451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e9168
00:27:08.738 [2024-11-20 10:44:49.210494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14858 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.210514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:003d p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.218830] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e49b0
00:27:08.738 [2024-11-20 10:44:49.219859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:18378 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.219879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.227429] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f2510
00:27:08.738 [2024-11-20 10:44:49.228320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:3140 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.228340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:004d p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.236939] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc128
00:27:08.738 [2024-11-20 10:44:49.238096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:18310 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.238115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:003e p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.246071] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f0350
00:27:08.738 [2024-11-20 10:44:49.246803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:10692 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.246823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:006f p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.254710] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f92c0
00:27:08.738 [2024-11-20 10:44:49.255810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:10030 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.255830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:006e p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.263718] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e5ec8
00:27:08.738 [2024-11-20 10:44:49.264562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:24125 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.264581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:27 cdw0:0 sqhd:004f p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.273154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e4de8
00:27:08.738 [2024-11-20 10:44:49.274278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:19297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.274298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:005f p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.281676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4f40
00:27:08.738 [2024-11-20 10:44:49.282616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:12260 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.738 [2024-11-20 10:44:49.282636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:08.738 [2024-11-20 10:44:49.290103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ebb98
00:27:08.738 [2024-11-20 10:44:49.290936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:12529 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.290955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:69 cdw0:0 sqhd:002f p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.298772] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ea248
00:27:08.739 [2024-11-20 10:44:49.299619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:12951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.299639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.309882] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e49b0
00:27:08.739 [2024-11-20 10:44:49.311062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:17820 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.311081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:0071 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.318452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e49b0
00:27:08.739 [2024-11-20 10:44:49.319495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:21017 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.319513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0063 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.327252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8
00:27:08.739 [2024-11-20 10:44:49.328112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1970 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.328135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.336149] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8
00:27:08.739 [2024-11-20 10:44:49.337014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:24379 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.337034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0062 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.344572] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e6738
00:27:08.739 [2024-11-20 10:44:49.345541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:9420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.345560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0022 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.353918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fd208
00:27:08.739 [2024-11-20 10:44:49.354931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:19464 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.354951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.362642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc128
00:27:08.739 [2024-11-20 10:44:49.363417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:21372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.363437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.372070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166dfdc0
00:27:08.739 [2024-11-20 10:44:49.372942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23297 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.372962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.382376] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f1868
00:27:08.739 [2024-11-20 10:44:49.383674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:6136 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.383693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0051 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.390326] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ee190
00:27:08.739 [2024-11-20 10:44:49.390859] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2369 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.390879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0050 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.399657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e9168
00:27:08.739 [2024-11-20 10:44:49.400261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17358 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.400280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:0060 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.410453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eb760
00:27:08.739 [2024-11-20 10:44:49.411906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:1973 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.411925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0070 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.416993] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e4578
00:27:08.739 [2024-11-20 10:44:49.417815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:12436 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.417834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0011 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.426408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fac10
00:27:08.739 [2024-11-20 10:44:49.427378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:2885 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.427398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.435477] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f92c0
00:27:08.739 [2024-11-20 10:44:49.435980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:12693 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.436000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0052 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.446836] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f20d8
00:27:08.739 [2024-11-20 10:44:49.448362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:12302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.448382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0073 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.453263] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f46d0
00:27:08.739 [2024-11-20 10:44:49.454134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:7302 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.739 [2024-11-20 10:44:49.454154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0014 p:0 m:0 dnr:0
00:27:08.739 [2024-11-20 10:44:49.464597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e88f8
00:27:08.998 [2024-11-20 10:44:49.465945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:7046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.465966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0056 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.473914] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f31b8
00:27:08.998 [2024-11-20 10:44:49.475238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:2352 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.475258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0066 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.480187] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ec408
00:27:08.998 [2024-11-20 10:44:49.480810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:25014 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.480829] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.489615] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e12d8
00:27:08.998 [2024-11-20 10:44:49.490356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:3108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.490375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.498748] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8
00:27:08.998 [2024-11-20 10:44:49.499500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:10597 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.499520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0016 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.507952] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e8088
00:27:08.998 [2024-11-20 10:44:49.508502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:13418 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.508522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.516257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eaab8
00:27:08.998 [2024-11-20 10:44:49.516892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:12104 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.516911] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0076 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.527257] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fda78
00:27:08.998 [2024-11-20 10:44:49.528379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:14386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.528398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.536324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166df118
00:27:08.998 [2024-11-20 10:44:49.536992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21556 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.537011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0069 p:0 m:0 dnr:0
00:27:08.998 [2024-11-20 10:44:49.544550] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fa3a0
00:27:08.998 [2024-11-20 10:44:49.545355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:7299 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.998 [2024-11-20 10:44:49.545374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.553461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166df550
00:27:08.999 [2024-11-20 10:44:49.554140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:7757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.554159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.562953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1b48
00:27:08.999 [2024-11-20 10:44:49.563842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:17383 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.563865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0048 p:0 m:0 dnr:0
00:27:08.999 27818.00 IOPS, 108.66 MiB/s [2024-11-20T09:44:49.730Z] [2024-11-20 10:44:49.572067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8
00:27:08.999 [2024-11-20 10:44:49.572862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:308 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.572881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0058 p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.581454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e84c0
00:27:08.999 [2024-11-20 10:44:49.582601] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:11301 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.582621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:79 cdw0:0 sqhd:0039 p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.589817] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eea00
00:27:08.999 [2024-11-20 10:44:49.590793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:8509 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.590813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0059 p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.599006] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eaef0
00:27:08.999 [2024-11-20 10:44:49.599935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:5865 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.599955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:001c p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.608760] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed4e8
00:27:08.999 [2024-11-20 10:44:49.609818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:24984 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.609839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:002c p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.618013] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0
00:27:08.999 [2024-11-20 10:44:49.618603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:15730 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.618624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:005d p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.627558] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7100
00:27:08.999 [2024-11-20 10:44:49.628271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:23939 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.628290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:006d p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.636125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eaef0
00:27:08.999 [2024-11-20 10:44:49.637375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:25419 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.637394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:006c p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.643857] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f20d8
00:27:08.999 [2024-11-20 10:44:49.644533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16512 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.644553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:007d p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.653268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8
00:27:08.999 [2024-11-20 10:44:49.654042] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:3060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.654061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:000d p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.662750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0
00:27:08.999 [2024-11-20 10:44:49.663667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:2425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.663687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:001d p:0 m:0 dnr:0
00:27:08.999 [2024-11-20 10:44:49.671731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4298
00:27:08.999 [2024-11-20 10:44:49.672208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:5012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:27:08.999 [2024-11-20 10:44:49.672228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:004e p:0 m:0
dnr:0 00:27:08.999 [2024-11-20 10:44:49.682534] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f3a28 00:27:08.999 [2024-11-20 10:44:49.683770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1276 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.999 [2024-11-20 10:44:49.683790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:27:08.999 [2024-11-20 10:44:49.691200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0630 00:27:08.999 [2024-11-20 10:44:49.692442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:10923 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.999 [2024-11-20 10:44:49.692462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:08.999 [2024-11-20 10:44:49.700799] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1f80 00:27:08.999 [2024-11-20 10:44:49.702162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:13927 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.999 [2024-11-20 10:44:49.702182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:08.999 [2024-11-20 10:44:49.707285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eb760 00:27:08.999 [2024-11-20 10:44:49.707963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:19642 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.999 [2024-11-20 10:44:49.707983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:08.999 [2024-11-20 10:44:49.716730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ddc00 00:27:08.999 [2024-11-20 10:44:49.717527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:5138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:08.999 [2024-11-20 10:44:49.717546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:27:09.258 [2024-11-20 10:44:49.726347] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4f40 00:27:09.258 [2024-11-20 10:44:49.727312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24066 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.258 [2024-11-20 10:44:49.727332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:09.258 [2024-11-20 10:44:49.736099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e6fa8 00:27:09.258 [2024-11-20 10:44:49.736588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:5767 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.258 [2024-11-20 10:44:49.736609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:27:09.258 [2024-11-20 10:44:49.745105] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f0788 00:27:09.258 [2024-11-20 10:44:49.745827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:7203 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.258 [2024-11-20 10:44:49.745847] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:09.258 [2024-11-20 10:44:49.754996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1710 00:27:09.258 [2024-11-20 10:44:49.756149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:3760 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.258 [2024-11-20 10:44:49.756168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:09.258 [2024-11-20 10:44:49.764288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ddc00 00:27:09.258 [2024-11-20 10:44:49.765481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:8663 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.258 [2024-11-20 10:44:49.765501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.772391] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4f40 00:27:09.259 [2024-11-20 10:44:49.773412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.773431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.781973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1f80 00:27:09.259 [2024-11-20 10:44:49.783111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:22946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 
[2024-11-20 10:44:49.783131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.791417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f20d8 00:27:09.259 [2024-11-20 10:44:49.792663] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:18359 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.792682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.800840] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc998 00:27:09.259 [2024-11-20 10:44:49.802225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:22394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.802248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.807345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fd640 00:27:09.259 [2024-11-20 10:44:49.808043] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:10739 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.808063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.818377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8 00:27:09.259 [2024-11-20 10:44:49.819531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:1406 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.819551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.827780] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f9f68 00:27:09.259 [2024-11-20 10:44:49.829054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.829074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.837221] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e7818 00:27:09.259 [2024-11-20 10:44:49.838590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:25510 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.838609] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.846647] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eb328 00:27:09.259 [2024-11-20 10:44:49.848165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:8291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.848185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.853246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ee190 00:27:09.259 [2024-11-20 10:44:49.854106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:34 nsid:1 lba:622 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.854127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.864566] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4298 00:27:09.259 [2024-11-20 10:44:49.865796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:8765 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.865818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.871402] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0ea0 00:27:09.259 [2024-11-20 10:44:49.872118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:9353 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.872137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.882531] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed920 00:27:09.259 [2024-11-20 10:44:49.883643] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:23626 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.883664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.892524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fef90 00:27:09.259 [2024-11-20 10:44:49.893933] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:12180 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.893952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.898692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eaef0 00:27:09.259 [2024-11-20 10:44:49.899400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20781 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.899420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.908349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4298 00:27:09.259 [2024-11-20 10:44:49.909183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:7105 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.909208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.917822] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e49b0 00:27:09.259 [2024-11-20 10:44:49.918803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:20116 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.918823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.929049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166efae0 00:27:09.259 
[2024-11-20 10:44:49.930476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.930497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.938509] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eb760 00:27:09.259 [2024-11-20 10:44:49.940049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:17773 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.940068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.945066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f20d8 00:27:09.259 [2024-11-20 10:44:49.945912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:3164 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.945931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.954492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc560 00:27:09.259 [2024-11-20 10:44:49.955456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:7448 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.955474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.964524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on 
tqpair=(0x15c7640) with pdu=0x2000166f0ff8 00:27:09.259 [2024-11-20 10:44:49.965600] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:24912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.965619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.973865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc560 00:27:09.259 [2024-11-20 10:44:49.975000] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.975020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:27:09.259 [2024-11-20 10:44:49.981345] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f8618 00:27:09.259 [2024-11-20 10:44:49.982001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:9429 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.259 [2024-11-20 10:44:49.982022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:49.990495] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed920 00:27:09.518 [2024-11-20 10:44:49.991260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:6305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:49.991280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:28 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:49.999378] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed920 00:27:09.518 [2024-11-20 10:44:50.000119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:11360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.000140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:50.007949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0630 00:27:09.518 [2024-11-20 10:44:50.008679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6677 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.008699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:50.020451] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f0ff8 00:27:09.518 [2024-11-20 10:44:50.021900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:9501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.021920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:50.027127] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de470 00:27:09.518 [2024-11-20 10:44:50.027866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:17128 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.027886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 
00:27:09.518 [2024-11-20 10:44:50.039197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc560 00:27:09.518 [2024-11-20 10:44:50.040603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:7326 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.040631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:50.048118] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f3e60 00:27:09.518 [2024-11-20 10:44:50.048818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:16240 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.518 [2024-11-20 10:44:50.048839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:27:09.518 [2024-11-20 10:44:50.057248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f2948 00:27:09.518 [2024-11-20 10:44:50.058162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:16527 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.058185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.068986] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ef6a8 00:27:09.519 [2024-11-20 10:44:50.070492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:9366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.070514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:118 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.076478] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7100 00:27:09.519 [2024-11-20 10:44:50.077109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:23903 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.077130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.086600] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f3e60 00:27:09.519 [2024-11-20 10:44:50.087305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:23553 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.087325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:16 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.096490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f4f40 00:27:09.519 [2024-11-20 10:44:50.097188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:14329 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.097214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:48 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.107183] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fb480 00:27:09.519 [2024-11-20 10:44:50.108362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:14150 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.108383] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.116593] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e73e0 00:27:09.519 [2024-11-20 10:44:50.117751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:8809 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.117772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.125180] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f3e60 00:27:09.519 [2024-11-20 10:44:50.126215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:24090 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.126235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.132958] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f46d0 00:27:09.519 [2024-11-20 10:44:50.133528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:16280 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.133548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:21 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.144152] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ee5c8 00:27:09.519 [2024-11-20 10:44:50.145426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:15233 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 
[2024-11-20 10:44:50.145447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.150683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e01f8 00:27:09.519 [2024-11-20 10:44:50.151309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:16883 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.151329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.161281] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed0b0 00:27:09.519 [2024-11-20 10:44:50.162157] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:25084 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.162177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.170338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f1868 00:27:09.519 [2024-11-20 10:44:50.171135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17691 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.171157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.178784] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed920 00:27:09.519 [2024-11-20 10:44:50.179599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:19135 len:1 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.179619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.187674] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ebfd0 00:27:09.519 [2024-11-20 10:44:50.188265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:25247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.188286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.198996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ed0b0 00:27:09.519 [2024-11-20 10:44:50.200360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:25561 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.200381] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.208350] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6cc8 00:27:09.519 [2024-11-20 10:44:50.209721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:16799 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.209741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.216269] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fef90 00:27:09.519 [2024-11-20 10:44:50.217191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:15 nsid:1 lba:17771 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.217215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.224672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ecc78 00:27:09.519 [2024-11-20 10:44:50.225594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:13701 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.225614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.233969] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7100 00:27:09.519 [2024-11-20 10:44:50.234446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:13778 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.234467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:27:09.519 [2024-11-20 10:44:50.243610] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fe720 00:27:09.519 [2024-11-20 10:44:50.244211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:17330 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.519 [2024-11-20 10:44:50.244232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.253044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166ee190 00:27:09.778 [2024-11-20 10:44:50.253973] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:15581 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.253993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.262225] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fcdd0 00:27:09.778 [2024-11-20 10:44:50.263107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:15403 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.263126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.270943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fd640 00:27:09.778 [2024-11-20 10:44:50.271790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:9722 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.271811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.280136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fc560 00:27:09.778 [2024-11-20 10:44:50.281036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:11599 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.281058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.289765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with 
pdu=0x2000166f1868 00:27:09.778 [2024-11-20 10:44:50.290824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.290843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.299107] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e5220 00:27:09.778 [2024-11-20 10:44:50.299747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20100 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.299767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.308088] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f1ca0 00:27:09.778 [2024-11-20 10:44:50.308975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:8272 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.308996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.317000] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f1868 00:27:09.778 [2024-11-20 10:44:50.317632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:10726 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.317652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.326088] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e1b48 00:27:09.778 [2024-11-20 10:44:50.326735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:9263 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.326756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.335789] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e9168 00:27:09.778 [2024-11-20 10:44:50.336665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:15609 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.336685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.347337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e2c28 00:27:09.778 [2024-11-20 10:44:50.348776] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:23752 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.348797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.353972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f0bc0 00:27:09.778 [2024-11-20 10:44:50.354662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:22612 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.354682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 
10:44:50.363298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f96f8 00:27:09.778 [2024-11-20 10:44:50.364001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:12657 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.364020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.374334] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0 00:27:09.778 [2024-11-20 10:44:50.375463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:4700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.375484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.382320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0ea0 00:27:09.778 [2024-11-20 10:44:50.382978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:13633 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.778 [2024-11-20 10:44:50.382998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:09.778 [2024-11-20 10:44:50.391559] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0ea0 00:27:09.778 [2024-11-20 10:44:50.392193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:230 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.392217] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.400625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fd640 00:27:09.779 [2024-11-20 10:44:50.401284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:19008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.401305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.410194] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166eee38 00:27:09.779 [2024-11-20 10:44:50.411052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:16884 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.411072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.419258] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f96f8 00:27:09.779 [2024-11-20 10:44:50.420104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:25488 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.420122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.428349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f46d0 00:27:09.779 [2024-11-20 10:44:50.429285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:19290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.429305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.437989] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e73e0 00:27:09.779 [2024-11-20 10:44:50.439031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:13989 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.439050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.447567] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6890 00:27:09.779 [2024-11-20 10:44:50.448736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:3447 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.448757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.457170] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166fda78 00:27:09.779 [2024-11-20 10:44:50.458470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:14195 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.458490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:18 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.466724] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0 00:27:09.779 [2024-11-20 10:44:50.468091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:13533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.468111] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:97 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.475173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f8a50 00:27:09.779 [2024-11-20 10:44:50.476212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:16282 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.476247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.485407] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166de8a8 00:27:09.779 [2024-11-20 10:44:50.486902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:19209 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.486922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.491796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e0630 00:27:09.779 [2024-11-20 10:44:50.492529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:24732 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:09.779 [2024-11-20 10:44:50.492548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:27:09.779 [2024-11-20 10:44:50.501805] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f7da8 00:27:09.779 [2024-11-20 10:44:50.503096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:525 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:27:09.779 [2024-11-20 10:44:50.503118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.510481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e84c0 00:27:10.037 [2024-11-20 10:44:50.511180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:23817 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.511200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.519968] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f6cc8 00:27:10.037 [2024-11-20 10:44:50.520833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:6977 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.520859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:93 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.528622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f8618 00:27:10.037 [2024-11-20 10:44:50.529430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:6318 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.529450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.538771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f0788 00:27:10.037 [2024-11-20 10:44:50.539719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3319 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.539739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.548252] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f9f68 00:27:10.037 [2024-11-20 10:44:50.549230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:10187 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.549250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.556737] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166e7c50 00:27:10.037 [2024-11-20 10:44:50.557665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:15141 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.557684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:27:10.037 [2024-11-20 10:44:50.565516] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7640) with pdu=0x2000166f35f0 00:27:10.037 [2024-11-20 10:44:50.566428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:2363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:10.037 [2024-11-20 10:44:50.566448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:27:10.037 27808.50 IOPS, 108.63 MiB/s 00:27:10.037 Latency(us) 00:27:10.037 [2024-11-20T09:44:50.768Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.037 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, 
depth: 128, IO size: 4096) 00:27:10.037 nvme0n1 : 2.01 27816.41 108.66 0.00 0.00 4595.79 2075.31 12732.71 00:27:10.037 [2024-11-20T09:44:50.768Z] =================================================================================================================== 00:27:10.037 [2024-11-20T09:44:50.768Z] Total : 27816.41 108.66 0.00 0.00 4595.79 2075.31 12732.71 00:27:10.037 { 00:27:10.037 "results": [ 00:27:10.037 { 00:27:10.037 "job": "nvme0n1", 00:27:10.037 "core_mask": "0x2", 00:27:10.037 "workload": "randwrite", 00:27:10.037 "status": "finished", 00:27:10.037 "queue_depth": 128, 00:27:10.037 "io_size": 4096, 00:27:10.037 "runtime": 2.006334, 00:27:10.037 "iops": 27816.40544395898, 00:27:10.037 "mibps": 108.65783376546477, 00:27:10.037 "io_failed": 0, 00:27:10.037 "io_timeout": 0, 00:27:10.037 "avg_latency_us": 4595.792180302034, 00:27:10.037 "min_latency_us": 2075.306666666667, 00:27:10.037 "max_latency_us": 12732.708571428571 00:27:10.037 } 00:27:10.037 ], 00:27:10.037 "core_count": 1 00:27:10.037 } 00:27:10.037 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:10.037 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:10.037 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:10.037 | .driver_specific 00:27:10.037 | .nvme_error 00:27:10.037 | .status_code 00:27:10.037 | .command_transient_transport_error' 00:27:10.037 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 218 > 0 )) 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3372713 00:27:10.295 10:44:50 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3372713 ']' 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3372713 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3372713 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3372713' 00:27:10.295 killing process with pid 3372713 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3372713 00:27:10.295 Received shutdown signal, test time was about 2.000000 seconds 00:27:10.295 00:27:10.295 Latency(us) 00:27:10.295 [2024-11-20T09:44:51.026Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:10.295 [2024-11-20T09:44:51.026Z] =================================================================================================================== 00:27:10.295 [2024-11-20T09:44:51.026Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:10.295 10:44:50 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3372713 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- 
host/digest.sh@54 -- # local rw bs qd 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=3373191 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 3373191 /var/tmp/bperf.sock 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@835 -- # '[' -z 3373191 ']' 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:27:10.295 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:10.295 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.553 [2024-11-20 10:44:51.063078] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:27:10.553 [2024-11-20 10:44:51.063137] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3373191 ] 00:27:10.553 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:10.553 Zero copy mechanism will not be used. 00:27:10.553 [2024-11-20 10:44:51.135536] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:10.553 [2024-11-20 10:44:51.172575] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:10.553 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:10.553 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@868 -- # return 0 00:27:10.553 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:10.553 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n 
nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:10.810 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:27:11.377 nvme0n1 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:27:11.377 10:44:51 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:27:11.377 I/O size of 131072 is greater than zero copy threshold (65536). 00:27:11.377 Zero copy mechanism will not be used. 00:27:11.377 Running I/O for 2 seconds... 
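The digest failures that follow are deliberate: the script injects CRC32C corruption (`accel_error_inject_error -o crc32c -t corrupt`) and attaches the controller with `--ddgst`, so every data digest the target recomputes mismatches the wire value and surfaces as a `data_crc32_calc_done` error. As a rough illustration of the checksum being validated (assumptions: this is a plain table-driven CRC32C sketch, not SPDK's accel-framework code path; the 4 KiB payload mirrors the `len:0x1000` WRITEs in the log):

```python
# Minimal CRC32C (Castagnoli) -- the checksum NVMe/TCP uses for its
# header (HDGST) and data (DDGST) digests. Illustrative only; SPDK
# computes this through its accel framework, not in Python.

_POLY = 0x82F63B78  # reversed Castagnoli polynomial


def _make_table():
    table = []
    for byte in range(256):
        crc = byte
        for _ in range(8):
            crc = (crc >> 1) ^ _POLY if crc & 1 else crc >> 1
        table.append(crc)
    return table


_TABLE = _make_table()


def crc32c(data: bytes, crc: int = 0) -> int:
    # Reflected CRC with 0xFFFFFFFF init and final XOR, per the
    # CRC-32C definition used by iSCSI and NVMe/TCP.
    crc ^= 0xFFFFFFFF
    for b in data:
        crc = _TABLE[(crc ^ b) & 0xFF] ^ (crc >> 8)
    return crc ^ 0xFFFFFFFF


# The receiver recomputes the digest over the PDU payload and compares
# it with the trailing DDGST field; any mismatch is reported the way
# the "Data digest error on tqpair=(...)" lines above show.
payload = bytes(4096)            # a 4 KiB payload like the len:0x1000 WRITEs
good = crc32c(payload)
corrupted = good ^ 0x1           # a single flipped bit, as the injection does
assert crc32c(b"123456789") == 0xE3069283  # standard CRC-32C check value
assert corrupted != good
```

Because the command data itself is intact, the initiator classifies each mismatch as a TRANSIENT TRANSPORT ERROR (status 00/22), which is exactly what `get_transient_errcount` later tallies from `bdev_get_iostat`.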
00:27:11.377 [2024-11-20 10:44:51.922779] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.922847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.922877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.929022] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.929092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.929115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.933670] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.933727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.933748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.938238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.938296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.938316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.942981] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.943046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.943065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.947458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.947562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.947581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.951951] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.952056] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.952075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.956417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.956491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.956511] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.960835] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.960901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.960920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.965385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.965447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.965466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.970125] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.970185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.970210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.975273] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.975332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.377 [2024-11-20 10:44:51.975350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.980743] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.980801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.980819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.985808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.985917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.985935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.990538] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.990815] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.990836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.995089] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.995365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:23104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.377 [2024-11-20 10:44:51.995385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.377 [2024-11-20 10:44:51.999522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.377 [2024-11-20 10:44:51.999805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:51.999826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.003750] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.004033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.004053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.007944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.008218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.008239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.012436] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.012718] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.012742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.016944] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.017224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.017245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.021349] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.021626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.021646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.025455] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.025744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.025764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.029541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:11.378 [2024-11-20 10:44:52.029832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.029853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.033687] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.033968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.033989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.037757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.038031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.038051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.041837] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.042108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.042130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.045934] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.046215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.046235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.050094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.050392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.050412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.054236] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.054532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.054552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.058377] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.058652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.058672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.062585] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.062858] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.062878] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.066767] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.067033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.067054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.071280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.071561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.071582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.076937] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.077319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.077339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:11.378 [2024-11-20 10:44:52.082662] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.082938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.082960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.087732] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.088027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.378 [2024-11-20 10:44:52.088047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.378 [2024-11-20 10:44:52.092156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.378 [2024-11-20 10:44:52.092430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.379 [2024-11-20 10:44:52.092451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.379 [2024-11-20 10:44:52.097136] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.379 [2024-11-20 10:44:52.097532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.379 [2024-11-20 10:44:52.097553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.379 [2024-11-20 10:44:52.103229] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.379 [2024-11-20 10:44:52.103547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.379 [2024-11-20 10:44:52.103567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.638 [2024-11-20 10:44:52.108410] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.638 [2024-11-20 10:44:52.108682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.638 [2024-11-20 10:44:52.108703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.638 [2024-11-20 10:44:52.113649] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.113936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.113956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.118435] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.118709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.118729] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.123414] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.123599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20320 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.123618] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.129420] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.129683] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.129704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.134302] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.134522] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.134545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.139031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.139296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.639 [2024-11-20 10:44:52.139316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.143953] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.144198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.144224] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.149246] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.149469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.149489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.153928] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.154169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.154189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.158485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.158687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:17088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.158707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.162479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.162636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.162656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.166427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.166597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.166617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.170195] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.170376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.170395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.174050] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.174258] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.174277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.178111] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.178304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.178322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.181997] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.182180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.182199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.185745] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.185928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.185946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.189434] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:11.639 [2024-11-20 10:44:52.189621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.189640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.193121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.193309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.193327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.196825] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.197010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.197028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.200535] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.200707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.200728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.204219] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.204408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.204428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.207918] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.208061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.639 [2024-11-20 10:44:52.208081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.639 [2024-11-20 10:44:52.211585] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.639 [2024-11-20 10:44:52.211738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.211756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.215232] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.215375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.215393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.219000] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.219141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.219159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.223207] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.223334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.223352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.227938] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.228098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.228119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.232898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.232982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.233000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:11.640 [2024-11-20 10:44:52.238524] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.238668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.238688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.244462] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.244581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.244604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.250450] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.250578] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.250596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.256408] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.256508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.256527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.262167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.262277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.262296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.268028] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.268142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.268161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.273913] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.273967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.273986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.278627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.278728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.278748] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.282978] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.283123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.283141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.287460] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.287632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.287651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.291965] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.292065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.292084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.296791] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.296905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.640 [2024-11-20 10:44:52.296924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.301243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.301366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.301384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.305722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.305833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.305852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.310393] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.310523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.310542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.315231] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.315347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20192 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.315365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.319781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.319893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.319912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.323917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.640 [2024-11-20 10:44:52.324041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.640 [2024-11-20 10:44:52.324060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.640 [2024-11-20 10:44:52.328267] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.328397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.328416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.332633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.332764] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.332782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.336800] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.336936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.336954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.341485] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.341589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.341607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.346815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.346978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.346997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.353200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 
10:44:52.353304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.353323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.641 [2024-11-20 10:44:52.359017] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.641 [2024-11-20 10:44:52.359250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.641 [2024-11-20 10:44:52.359270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.365244] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.365414] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.365433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.371119] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.371323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.371342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.377899] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) 
with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.378016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.378038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.383031] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.383171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.383191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.386896] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.387021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.387040] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.390642] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.390789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.390808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.394331] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.394469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.394488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.398058] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.398206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.398225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.401763] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.401912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.900 [2024-11-20 10:44:52.401930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.900 [2024-11-20 10:44:52.405458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.900 [2024-11-20 10:44:52.405606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.405624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 
10:44:52.409165] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.409326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.409344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.413274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.413395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.413413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.417617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.417750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.417767] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.422265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.422375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.422393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.426719] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.426856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.426874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.430964] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.431144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.431163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.435523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.435639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.435658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.440066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.440212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.440231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.444547] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.444685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.444703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.449029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.449138] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.449157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.453413] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.453524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.453543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.457672] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.457791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.457809] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.462354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.462476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.462495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.466843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.466988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.467007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.471324] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.471430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.471449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.475886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.476015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:11.901 [2024-11-20 10:44:52.476033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.480406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.480540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.480558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.485041] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.485154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.485173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.489522] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.489656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.489678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.493905] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.494006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:25184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.494025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.498496] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.498625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.498644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.502996] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.503128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.503146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.507796] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.507914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.507933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.512217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.512354] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.512373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.517333] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.901 [2024-11-20 10:44:52.517470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.901 [2024-11-20 10:44:52.517488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.901 [2024-11-20 10:44:52.521384] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.521510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.521528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.525162] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.525308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15072 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.525327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.528897] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:11.902 [2024-11-20 10:44:52.529040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.529059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.532597] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.532738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.532756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.536288] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.536426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.536445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.539943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.540075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.540093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.544173] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.544269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.544288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.549886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.549987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.550006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.554845] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.555111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.555132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.560956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.561094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.561113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.567161] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.567311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.567330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.573018] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.573115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.573134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.578367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.578469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.578489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.582798] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.582918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.582937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:11.902 [2024-11-20 10:44:52.587154] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.587278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.587297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.591541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.591647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.591665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.596449] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.596584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.596602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.600832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.600937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.600956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.605151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.605286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.605305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.609973] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.610068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.610090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.614318] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.614453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.614472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.618886] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.619014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.619033] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:11.902 [2024-11-20 10:44:52.623212] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:11.902 [2024-11-20 10:44:52.623322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:11.902 [2024-11-20 10:44:52.623340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.627528] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.627651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.627669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.631889] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.631999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.632018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.636589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.636716] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 
[2024-11-20 10:44:52.636734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.641260] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.641341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.641361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.645735] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.645842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.645860] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.649925] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.650039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10528 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.650057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.654523] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.654629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1312 len:32 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.654648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.659049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.659164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.659183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.663461] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.663551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.663570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.667464] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.667582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15360 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.667601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.671315] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.671459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:1 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.671478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.675121] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.675268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.675287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.679132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.679287] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.162 [2024-11-20 10:44:52.679307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.162 [2024-11-20 10:44:52.683066] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.162 [2024-11-20 10:44:52.683197] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.683222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.687064] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.687214] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.687233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.691061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.691182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.691207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.695274] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.695412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.695432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.699265] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.699408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.699427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.703151] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with 
pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.703307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.703325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.707484] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.707621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.707640] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.711491] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.711626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.711645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.715548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.715685] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.715703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.719615] tcp.c:2233:data_crc32_calc_done: 
*ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.719727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.719753] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.723521] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.723658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.723677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.727312] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.727431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.727450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.731316] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.731447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.731466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 
10:44:52.735527] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.735650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.735669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.739982] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.740094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.740113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.744190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.744313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.744332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.748689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.748814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16832 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.748833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.752891] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.752993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.753012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.757782] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.757875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.757894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.762493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.762597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.762616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.766916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.766997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.767016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.163 [2024-11-20 10:44:52.771640] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.163 [2024-11-20 10:44:52.771751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.163 [2024-11-20 10:44:52.771770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.775699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.775818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.775837] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.779609] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.779736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.779755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.783518] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.783659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.783678] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.787759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.787874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.787894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.791690] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.791834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.791852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.795544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.795658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.795677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.799506] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.799656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.164 [2024-11-20 10:44:52.799674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.804238] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.804339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.804358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.808454] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.808594] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.808613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.812452] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.812575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.812593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.816360] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.816495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7200 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.816514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.820176] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.820327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.820347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.824104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.824235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.824253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.828233] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.828370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.828393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.832143] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.832271] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.832290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.835893] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.836027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.836046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.839683] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.839808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.839826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.844247] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.844378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.844396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.848751] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.848887] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.848907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.852713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.852841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.852859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.856626] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.856744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.856764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.860583] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.860717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.860737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.864476] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with 
pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.864616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.864636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.868367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.164 [2024-11-20 10:44:52.868502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.164 [2024-11-20 10:44:52.868522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.164 [2024-11-20 10:44:52.872190] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.165 [2024-11-20 10:44:52.872345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.165 [2024-11-20 10:44:52.872363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.165 [2024-11-20 10:44:52.875991] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.165 [2024-11-20 10:44:52.876104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.165 [2024-11-20 10:44:52.876123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.165 [2024-11-20 10:44:52.880153] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.165 [2024-11-20 10:44:52.880283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.165 [2024-11-20 10:44:52.880303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.165 [2024-11-20 10:44:52.884843] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.165 [2024-11-20 10:44:52.884967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.165 [2024-11-20 10:44:52.884986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.424 [2024-11-20 10:44:52.889015] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.424 [2024-11-20 10:44:52.889137] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.424 [2024-11-20 10:44:52.889157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.424 [2024-11-20 10:44:52.893532] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.424 [2024-11-20 10:44:52.893650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.424 [2024-11-20 10:44:52.893669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.424 [2024-11-20 
10:44:52.897947] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.898048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.898067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.902425] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.902549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.902568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.906888] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.906999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.907018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.911622] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.911729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.911748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 
sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.915757] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.915879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.915898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.919766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.919912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.919930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 6923.00 IOPS, 865.38 MiB/s [2024-11-20T09:44:53.156Z] [2024-11-20 10:44:52.925067] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.925199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.925227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.928917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.929059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.929077] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.932769] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.932921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23552 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.932940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.936692] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.936820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.936839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.940771] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.940882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.940901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.945560] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.945710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.425 [2024-11-20 10:44:52.945730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.949832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.949956] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.949974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.953869] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.954008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.954027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.957903] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.958034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.958053] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.961764] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.961903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:14336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.961922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.965633] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.965775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21728 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.965793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.969604] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.969741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.969759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.973823] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.973947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.973965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.978371] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.978503] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.978522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.982367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.982500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.982519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.986280] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.986403] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.986422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.990185] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.990333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.990353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.994098] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:12.425 [2024-11-20 10:44:52.994244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.994263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:52.997865] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:52.998002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:52.998021] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:53.001727] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:53.001845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.425 [2024-11-20 10:44:53.001863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.425 [2024-11-20 10:44:53.006355] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.425 [2024-11-20 10:44:53.006490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.006512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.010586] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.010727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.010746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.014467] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.014613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.014631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.018389] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.018520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.018538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.022266] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.022398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.022416] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.026042] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.026170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.026188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.029971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.030108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.030128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.033901] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.034052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.034070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.037729] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.037869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.037887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 
dnr:0 00:27:12.426 [2024-11-20 10:44:53.041453] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.041602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.041623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.045598] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.045767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.045785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.049541] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.049704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2272 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.049722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.054308] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.054543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.054564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.059916] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.060115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.060134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.066056] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.066237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.066256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.072243] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.072466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.072488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.078337] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.078604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.078624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.085277] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.085524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.085545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.091330] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.091490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.091510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.097730] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.097959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.097980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.103826] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.104076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.426 [2024-11-20 10:44:53.104097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.110189] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.110334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.110353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.116569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.116707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.116726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.122759] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.122986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.123005] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.128900] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.129103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.129122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.135544] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.135715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.135734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.141603] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.426 [2024-11-20 10:44:53.141841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.426 [2024-11-20 10:44:53.141865] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.426 [2024-11-20 10:44:53.146713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.427 [2024-11-20 10:44:53.146838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.427 [2024-11-20 10:44:53.146856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.427 [2024-11-20 10:44:53.150657] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.686 [2024-11-20 10:44:53.150822] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.686 [2024-11-20 10:44:53.150842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.686 [2024-11-20 10:44:53.154394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.686 [2024-11-20 10:44:53.154535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.686 [2024-11-20 10:44:53.154555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.686 [2024-11-20 10:44:53.158156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.686 [2024-11-20 10:44:53.158356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.686 [2024-11-20 10:44:53.158374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.686 [2024-11-20 10:44:53.161815] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.686 [2024-11-20 10:44:53.161985] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.686 [2024-11-20 10:44:53.162004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.686 [2024-11-20 10:44:53.165468] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:12.686 [2024-11-20 10:44:53.165634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.686 [2024-11-20 10:44:53.165652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.686 [2024-11-20 10:44:53.169095] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.169278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.169297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.172722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.172892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.172910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.176356] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.176519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.176537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.179979] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.180155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.180174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.183591] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.183757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.183776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.187338] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.187491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.187511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.191049] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.191221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.191239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.194700] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.194867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.194886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.198387] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.198559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.198577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.202026] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.202198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3008 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.202225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.205725] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.205890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.205908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:12.687 [2024-11-20 10:44:53.209401] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.209566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.209586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.213082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.213243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.213261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.216963] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.217116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.217134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.221705] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.221759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.221776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.226021] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.226176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.226194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.230011] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.687 [2024-11-20 10:44:53.230164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.687 [2024-11-20 10:44:53.230183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.687 [2024-11-20 10:44:53.234052] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.234216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.234235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.237972] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.238108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.238126] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.241927] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.242075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19104 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.242096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.245795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.245942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.245961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.249569] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.249729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.249750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.253394] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.253550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.688 [2024-11-20 10:44:53.253570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.257428] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.257554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.257575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.262167] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.262293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.262313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.266392] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.266540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.266560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.270234] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.270410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11872 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.270430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.274091] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.274271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.274292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.277926] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.278081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.278101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.281832] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.281995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.282015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.285629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.285787] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.285807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.289382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.289544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.289562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.293287] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.293434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.293452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.297103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.297274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.297292] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.300870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:12.688 [2024-11-20 10:44:53.301023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.688 [2024-11-20 10:44:53.301041] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.688 [2024-11-20 10:44:53.305024] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.688 [2024-11-20 10:44:53.305150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.305168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.309676] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.309821] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.309840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.313623] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.313766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.313785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.317548] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.317702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.317720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.321422] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.321571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.321589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.325259] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.325409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.325427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.329103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.329265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.329283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.332849] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.332990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.333009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.336682] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.336846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.336864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.340417] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.340573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.340592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.344629] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.344769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.344790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:12.689 [2024-11-20 10:44:53.349336] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.349486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.349504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.353479] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.353626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.353645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.357291] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.357445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.357463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.361065] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.361235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.361254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.364788] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.364937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.364956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.368681] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.368852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.368870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.373320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.373452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.373470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.377492] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.377640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.689 [2024-11-20 10:44:53.377660] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.689 [2024-11-20 10:44:53.381406] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.689 [2024-11-20 10:44:53.381551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.381569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.385488] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.385640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.385659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.389382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.389530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.389548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.393320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.393476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.690 [2024-11-20 10:44:53.393494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.397248] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.397406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13376 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.397425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.401070] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.401226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.401246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.404923] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.405061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.405079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.690 [2024-11-20 10:44:53.408828] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.690 [2024-11-20 10:44:53.409003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 
lba:20672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.690 [2024-11-20 10:44:53.409022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.412761] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.412918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.412935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.416602] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.416744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.416763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.420423] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.420577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.420596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.424357] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.424510] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.424528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.428245] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.428411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.428429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.432077] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.432255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.432274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.435821] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.435971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.435990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.439607] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:12.950 [2024-11-20 10:44:53.439772] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.439790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.443427] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.443613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:22432 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.443631] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.447224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.447399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.447421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.451005] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.451159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.451178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.454917] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.455091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.455109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.459217] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.459354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.459373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.463698] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.463867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.463885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.467557] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.467715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.467734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.471387] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.471558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.471576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.475311] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.475455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:11744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.475474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.479113] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.479288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.479306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.482949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.483117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.483136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 
dnr:0 00:27:12.950 [2024-11-20 10:44:53.487589] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.487734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.487752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.492007] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.492170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.492188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.495957] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.496102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:20736 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.496120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.499851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.950 [2024-11-20 10:44:53.499993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.950 [2024-11-20 10:44:53.500011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.950 [2024-11-20 10:44:53.503699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.503827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:23392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.503845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.507616] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.507766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.507785] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.511508] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.511645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.511663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.515320] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.515488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.515506] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.519132] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.519291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.519309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.522851] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.523005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:15648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.523023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.526617] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.526753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.526772] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.530493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.530645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:12.951 [2024-11-20 10:44:53.530664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.535029] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.535170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.535189] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.539241] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.539388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.539406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.543074] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.543250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:19232 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.543268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.546931] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.547087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:4544 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.547105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.550862] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.551010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.551033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.554713] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.554853] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.554871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.559465] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.559611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.559629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.563268] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.563427] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:7520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.563445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.567010] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.567154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.567172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.570722] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.570887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.570904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.574458] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.574604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.574623] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.578224] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.578381] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.578399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.582019] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.582172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:13152 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.582190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.586400] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.586539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.586557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.590956] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.591103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.591121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.594765] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with 
pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.594906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.594925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.598625] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.598782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.598799] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.602715] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.602872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.602891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.606529] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.951 [2024-11-20 10:44:53.606689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.951 [2024-11-20 10:44:53.606707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.951 [2024-11-20 10:44:53.610268] tcp.c:2233:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.610432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.610450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.614044] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.614199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6208 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.614225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.618156] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.618323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.618342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.622781] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.622910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.622929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 
10:44:53.626808] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.626952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.626970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.630736] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.630884] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.630903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.634612] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.634764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.634782] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.638443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.638586] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.638604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 
sqhd:0002 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.642262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.642402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.642420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.646249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.646401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.646419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.650103] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.650267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.650285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.654511] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.654668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:1248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.654690] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.659285] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.659396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.659414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.663738] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.663873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.663891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.667664] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.667812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.667830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.671481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.671632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.671651] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:12.952 [2024-11-20 10:44:53.675439] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:12.952 [2024-11-20 10:44:53.675595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:12.952 [2024-11-20 10:44:53.675614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.679768] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.679926] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.679944] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.684262] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.684366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:25440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.684385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.688404] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.688547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.212 [2024-11-20 10:44:53.688582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.692307] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.692437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.692456] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.696443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.696617] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:10944 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.696636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.700385] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.700555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:12256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.700575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.704367] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.704506] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.704526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.708362] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.708508] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21600 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.708528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.712298] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.712445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.712464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.716481] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.716638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.716657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.720510] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.720667] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.720687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.724443] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.724612] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15904 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.724630] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.728186] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.728363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:4352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.728382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.732034] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.732179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.732196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.736094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:13.212 [2024-11-20 10:44:53.736240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.736258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.740783] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.740958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.740976] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.744699] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.744851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2880 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.744869] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.212 [2024-11-20 10:44:53.748628] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.212 [2024-11-20 10:44:53.748791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21824 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.212 [2024-11-20 10:44:53.748810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.752493] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.752656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:23264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.752674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.756418] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.756567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.756585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.760378] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.760532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21472 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.760553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.764282] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.764433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.764451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.768278] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.768426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.768444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.772255] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.772405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.772423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.776104] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.776268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.776286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.779971] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.780125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16224 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.780143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 
dnr:0 00:27:13.213 [2024-11-20 10:44:53.783870] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.784014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2592 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.784033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.787731] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.787890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.787908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.791716] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.791870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:18560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.791888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.795693] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.795845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.795863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.799546] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.799697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:21984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.799714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.803351] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.803502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.803520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.807398] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.807541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.807560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.811827] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.811965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:13760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.811983] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.815898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.816061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:15520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.816079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.819866] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.820019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.820037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.823689] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.823845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.823863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.827490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.827659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:27:13.213 [2024-11-20 10:44:53.827676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.831382] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.831525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.831543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.835094] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.835243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.835262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.839061] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.839220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8512 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.839238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.842766] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.842929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17856 
len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.842947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.213 [2024-11-20 10:44:53.846494] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.213 [2024-11-20 10:44:53.846667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2976 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.213 [2024-11-20 10:44:53.846685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.850200] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.850364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:14464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.850382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.853949] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.854102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.854120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.857686] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.857836] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.857854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.861372] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.861534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:288 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.861556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.865082] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.865262] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.865280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.868795] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.868967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:20000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.868986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.872490] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 
00:27:13.214 [2024-11-20 10:44:53.872645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.872663] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.876196] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.876368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:6304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.876386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.879898] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.880052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24256 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.880070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.883584] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.883745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:11712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.883764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.887278] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest 
error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.887440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:3328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.887458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.890943] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.891107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.891125] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.894627] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.894791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:5440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.894809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.898457] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.898630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.898647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.902949] 
tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.903063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:9440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.903081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.907415] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.907587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:24896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.907606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.911354] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.911505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:8960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.911523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.915249] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.915396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.915414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0042 p:0 m:0 
dnr:0 00:27:13.214 [2024-11-20 10:44:53.919197] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.919353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:704 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.919370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:27:13.214 [2024-11-20 10:44:53.923099] tcp.c:2233:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x15c7980) with pdu=0x2000166ff3c8 00:27:13.214 [2024-11-20 10:44:53.923254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:2048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:13.214 [2024-11-20 10:44:53.923273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:0 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:27:13.214 7257.00 IOPS, 907.12 MiB/s 00:27:13.214 Latency(us) 00:27:13.214 [2024-11-20T09:44:53.945Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.214 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:27:13.214 nvme0n1 : 2.00 7255.52 906.94 0.00 0.00 2201.47 1466.76 7084.13 00:27:13.214 [2024-11-20T09:44:53.945Z] =================================================================================================================== 00:27:13.214 [2024-11-20T09:44:53.945Z] Total : 7255.52 906.94 0.00 0.00 2201.47 1466.76 7084.13 00:27:13.214 { 00:27:13.214 "results": [ 00:27:13.214 { 00:27:13.214 "job": "nvme0n1", 00:27:13.214 "core_mask": "0x2", 00:27:13.214 "workload": "randwrite", 00:27:13.214 "status": "finished", 00:27:13.214 "queue_depth": 16, 00:27:13.214 "io_size": 131072, 00:27:13.214 "runtime": 2.003165, 00:27:13.214 "iops": 7255.518142539431, 00:27:13.214 "mibps": 906.9397678174289, 
00:27:13.214 "io_failed": 0, 00:27:13.214 "io_timeout": 0, 00:27:13.214 "avg_latency_us": 2201.4662687819036, 00:27:13.214 "min_latency_us": 1466.7580952380952, 00:27:13.214 "max_latency_us": 7084.129523809524 00:27:13.214 } 00:27:13.214 ], 00:27:13.214 "core_count": 1 00:27:13.214 } 00:27:13.472 10:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:27:13.472 10:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:27:13.472 10:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:27:13.472 | .driver_specific 00:27:13.472 | .nvme_error 00:27:13.472 | .status_code 00:27:13.472 | .command_transient_transport_error' 00:27:13.472 10:44:53 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 469 > 0 )) 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 3373191 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3373191 ']' 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3373191 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.472 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3373191 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # 
process_name=reactor_1 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3373191' 00:27:13.731 killing process with pid 3373191 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3373191 00:27:13.731 Received shutdown signal, test time was about 2.000000 seconds 00:27:13.731 00:27:13.731 Latency(us) 00:27:13.731 [2024-11-20T09:44:54.462Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:27:13.731 [2024-11-20T09:44:54.462Z] =================================================================================================================== 00:27:13.731 [2024-11-20T09:44:54.462Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3373191 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 3371531 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # '[' -z 3371531 ']' 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # kill -0 3371531 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # uname 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3371531 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:27:13.731 10:44:54 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3371531' 00:27:13.731 killing process with pid 3371531 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@973 -- # kill 3371531 00:27:13.731 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@978 -- # wait 3371531 00:27:13.991 00:27:13.991 real 0m13.901s 00:27:13.991 user 0m26.436s 00:27:13.991 sys 0m4.683s 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:27:13.991 ************************************ 00:27:13.991 END TEST nvmf_digest_error 00:27:13.991 ************************************ 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@99 -- # sync 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@102 -- # set +e 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:13.991 rmmod nvme_tcp 00:27:13.991 rmmod nvme_fabrics 00:27:13.991 rmmod nvme_keyring 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 
00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@106 -- # set -e 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@107 -- # return 0 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@336 -- # '[' -n 3371531 ']' 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@337 -- # killprocess 3371531 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@954 -- # '[' -z 3371531 ']' 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@958 -- # kill -0 3371531 00:27:13.991 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3371531) - No such process 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@981 -- # echo 'Process with pid 3371531 is not found' 00:27:13.991 Process with pid 3371531 is not found 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # nvmf_fini 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@264 -- # local dev 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:13.991 10:44:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@130 -- # return 0 00:27:16.525 10:44:56 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # _dev=0 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@41 -- # dev_map=() 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/setup.sh@284 -- # iptr 
00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-save 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@542 -- # iptables-restore 00:27:16.525 00:27:16.525 real 0m36.583s 00:27:16.525 user 0m55.294s 00:27:16.525 sys 0m13.971s 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:27:16.525 ************************************ 00:27:16.525 END TEST nvmf_digest 00:27:16.525 ************************************ 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:16.525 ************************************ 00:27:16.525 START TEST nvmf_host_discovery 00:27:16.525 ************************************ 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:27:16.525 * Looking for test storage... 
00:27:16.525 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lcov --version 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # IFS=.-: 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@336 -- # read -ra ver1 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # IFS=.-: 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@337 -- # read -ra ver2 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@338 -- # local 'op=<' 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@340 -- # ver1_l=2 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@341 -- # ver2_l=1 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@344 -- # case "$op" in 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@345 -- # : 1 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
scripts/common.sh@364 -- # (( v = 0 )) 00:27:16.525 10:44:56 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # decimal 1 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=1 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 1 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@365 -- # ver1[v]=1 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # decimal 2 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@353 -- # local d=2 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@355 -- # echo 2 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@366 -- # ver2[v]=2 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@368 -- # return 0 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:27:16.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.525 --rc genhtml_branch_coverage=1 00:27:16.525 --rc genhtml_function_coverage=1 00:27:16.525 --rc 
genhtml_legend=1 00:27:16.525 --rc geninfo_all_blocks=1 00:27:16.525 --rc geninfo_unexecuted_blocks=1 00:27:16.525 00:27:16.525 ' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:16.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.525 --rc genhtml_branch_coverage=1 00:27:16.525 --rc genhtml_function_coverage=1 00:27:16.525 --rc genhtml_legend=1 00:27:16.525 --rc geninfo_all_blocks=1 00:27:16.525 --rc geninfo_unexecuted_blocks=1 00:27:16.525 00:27:16.525 ' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:16.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.525 --rc genhtml_branch_coverage=1 00:27:16.525 --rc genhtml_function_coverage=1 00:27:16.525 --rc genhtml_legend=1 00:27:16.525 --rc geninfo_all_blocks=1 00:27:16.525 --rc geninfo_unexecuted_blocks=1 00:27:16.525 00:27:16.525 ' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:16.525 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:16.525 --rc genhtml_branch_coverage=1 00:27:16.525 --rc genhtml_function_coverage=1 00:27:16.525 --rc genhtml_legend=1 00:27:16.525 --rc geninfo_all_blocks=1 00:27:16.525 --rc geninfo_unexecuted_blocks=1 00:27:16.525 00:27:16.525 ' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:16.525 10:44:57 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:16.525 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@15 -- # shopt -s extglob 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 
00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@50 -- # : 0 00:27:16.526 
10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:16.526 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # DISCOVERY_PORT=8009 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@12 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@15 -- # NQN=nqn.2016-06.io.spdk:cnode 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@18 -- # HOST_SOCK=/tmp/host.sock 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # nvmftestinit 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@294 -- # trap 
nvmftestfini SIGINT SIGTERM EXIT 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # remove_target_ns 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # xtrace_disable 00:27:16.526 10:44:57 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # pci_devs=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@135 -- # net_devs=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # e810=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@136 -- # local -ga e810 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # x722=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@137 -- # local -ga x722 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # mlx=() 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@138 -- # local -ga mlx 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:23.092 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:23.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:23.093 10:45:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:23.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:23.093 
Found net devices under 0000:86:00.0: cvl_0_0 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:23.093 Found net devices under 0000:86:00.1: cvl_0_1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@262 -- # is_hw=yes 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@257 -- # create_target_ns 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@28 -- # local -g _dev 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 
00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # ips=() 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 
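The `setup_interfaces` arithmetic traced above (`ip_pool=0x0a000001`, `ip_pool += _dev * 2`) hands each initiator/target pair two consecutive addresses from the pool. A minimal sketch of that allocation, with the pool held as a 32-bit integer as in the trace (variable names mirror the script; the loop body is a simplification of what `setup_interface_pair` actually does):

```shell
# Sketch of the ip_pool allocation in setup_interfaces: pair N gets
# pool_base + 2N (initiator) and pool_base + 2N + 1 (target).
ip_pool=$((0x0a000001))   # pool base, i.e. 10.0.0.1 as a 32-bit value
no=1                      # number of pairs, as in "setup_interfaces 1 phy"
for (( _dev = 0; _dev < no; _dev++ )); do
  initiator=$((ip_pool + _dev * 2))   # even offset -> initiator side
  target=$((initiator + 1))          # odd offset  -> target side
  echo "pair $_dev: initiator=$initiator target=$target"
done
```

With one pair this yields 167772161/167772162, which `val_to_ip` later renders as 10.0.0.1 and 10.0.0.2.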
00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772161 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:23.093 10.0.0.1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@11 -- # local val=167772162 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:23.093 10.0.0.2 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:23.093 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@76 -- # set_up cvl_0_1 
NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@96 -- # local 
pairs=1 pair 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:23.094 10:45:02 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:23.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:23.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:27:23.094 00:27:23.094 --- 10.0.0.1 ping statistics --- 00:27:23.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.094 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.094 10:45:02 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@107 -- # local dev=target0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:23.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:23.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:27:23.094 00:27:23.094 --- 10.0.0.2 ping statistics --- 00:27:23.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:23.094 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # return 0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ 
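As the repeated `cat /sys/class/net/<dev>/ifalias` calls above show, the harness stashes each interface's IP in its sysfs `ifalias` file at `set_ip` time and reads it back in `get_ip_address`. A sketch of that lookup, using a mktemp directory to stand in for `/sys/class/net` so it runs without real devices (the mock directory is an assumption; the real script reads sysfs directly, optionally through `ip netns exec`):

```shell
# Mocked get_ip_address: set_ip writes the address into ifalias,
# get_ip_address reads it back. $sysfs substitutes for /sys/class/net.
sysfs=$(mktemp -d)
mkdir -p "$sysfs/cvl_0_0"
echo 10.0.0.1 > "$sysfs/cvl_0_0/ifalias"   # what set_ip's tee does
get_ip_address() { cat "$sysfs/$1/ifalias"; }
get_ip_address cvl_0_0   # 10.0.0.1
```

Storing the address in `ifalias` lets later lookups work uniformly whether the device sits in the root namespace (initiator side) or inside `nvmf_ns_spdk` (target side).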
-n '' ]] 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:23.094 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@107 -- # local dev=target1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@109 -- # return 1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@168 -- # dev= 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@169 -- # return 0 00:27:23.095 10:45:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmfappstart -m 0x2 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@328 -- # nvmfpid=3377438 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # waitforlisten 3377438 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@835 -- # '[' -z 3377438 ']' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 
00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:23.095 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 [2024-11-20 10:45:03.184009] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:23.095 [2024-11-20 10:45:03.184058] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:23.095 [2024-11-20 10:45:03.265245] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.095 [2024-11-20 10:45:03.306002] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:23.095 [2024-11-20 10:45:03.306036] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:23.095 [2024-11-20 10:45:03.306044] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:23.095 [2024-11-20 10:45:03.306050] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:23.095 [2024-11-20 10:45:03.306055] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:27:23.095 [2024-11-20 10:45:03.306638] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 [2024-11-20 10:45:03.446032] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 [2024-11-20 10:45:03.458216] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:23.095 10:45:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # rpc_cmd bdev_null_create null0 1000 512 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 null0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@31 -- # rpc_cmd bdev_null_create null1 1000 512 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 null1 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd bdev_wait_for_examine 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@40 -- # hostpid=3377580 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@41 -- # waitforlisten 3377580 /tmp/host.sock 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
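The `rpc_cmd` calls traced here boil down to a short JSON-RPC sequence against the target. A dry-run sketch (drop the leading `echo` to issue the calls against a live `nvmf_tgt`; note the trace sends target-side commands to the default socket and host-side commands to `/tmp/host.sock` via `-s`, which this sketch glosses over):

```shell
# Dry-run of the target-side RPC sequence from host/discovery.sh:
# create the TCP transport, expose the discovery subsystem, back it
# with two null bdevs, then wait for bdev examination to settle.
RPC="echo rpc.py"   # 'echo' wrapper makes this a harmless dry run
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery \
     -t tcp -a 10.0.0.2 -s 8009
$RPC bdev_null_create null0 1000 512
$RPC bdev_null_create null1 1000 512
$RPC bdev_wait_for_examine
```

Port 8009 is the conventional NVMe-oF discovery service port, which is why the listener is attached to the well-known discovery NQN rather than a data subsystem.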
common/autotest_common.sh@835 -- # '[' -z 3377580 ']' 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:23.095 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.095 [2024-11-20 10:45:03.537372] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:23.095 [2024-11-20 10:45:03.537415] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3377580 ] 00:27:23.095 [2024-11-20 10:45:03.611021] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:23.095 [2024-11-20 10:45:03.653370] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@868 -- # return 0 00:27:23.095 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@43 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:27:23.096 
10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # notify_id=0 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # get_subsystem_names 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.096 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@78 -- # [[ '' == '' ]] 00:27:23.096 10:45:03 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # get_bdev_list 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # [[ '' == '' ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@81 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # get_subsystem_names 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@82 -- # [[ '' == '' ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_bdev_list 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@85 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # get_subsystem_names 00:27:23.354 
10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:23.354 10:45:03 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # [[ '' == '' ]] 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_bdev_list 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 
10.0.0.2 -s 4420 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.354 [2024-11-20 10:45:04.075790] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:23.354 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.612 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_subsystem_names 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # get_bdev_list 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@50 -- # sort 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@93 -- # [[ '' == '' ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@94 -- # is_notification_count_eq 0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@100 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 
00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == \n\v\m\e\0 ]] 00:27:23.613 10:45:04 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:24.178 [2024-11-20 10:45:04.814359] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:24.178 [2024-11-20 10:45:04.814378] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:24.178 [2024-11-20 10:45:04.814392] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:24.178 [2024-11-20 10:45:04.902648] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:24.436 [2024-11-20 10:45:04.963313] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:24.436 [2024-11-20 10:45:04.964091] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 
1] Connecting qpair 0x1601dd0:1 started. 00:27:24.436 [2024-11-20 10:45:04.965450] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:24.436 [2024-11-20 10:45:04.965466] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:24.436 [2024-11-20 10:45:04.972740] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x1601dd0 was disconnected and freed. delete nvme_qpair. 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.693 10:45:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@101 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@102 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:27:24.693 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 == \4\4\2\0 ]] 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # is_notification_count_eq 1 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 
'get_notification_count && ((notification_count == expected_count))' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=1 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:24.951 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock 
bdev_get_bdevs 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:24.952 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:25.210 [2024-11-20 10:45:05.692870] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x160ef90:1 started. 00:27:25.210 [2024-11-20 10:45:05.695266] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpair 0x160ef90 was disconnected and freed. delete nvme_qpair. 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@109 -- # is_notification_count_eq 1 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=1 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.210 10:45:05 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=1 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.210 [2024-11-20 10:45:05.776374] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:27:25.210 [2024-11-20 
10:45:05.777166] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:25.210 [2024-11-20 10:45:05.777185] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@115 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@116 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:25.210 [2024-11-20 10:45:05.863764] bdev_nvme.c:7402:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:25.210 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@117 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 
4420 == \4\4\2\0\ \4\4\2\1 ]] 00:27:25.211 10:45:05 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:25.468 [2024-11-20 10:45:06.129954] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4421 00:27:25.468 [2024-11-20 10:45:06.129991] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:25.468 [2024-11-20 10:45:06.129999] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:25.468 [2024-11-20 10:45:06.130004] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # is_notification_count_eq 0 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.404 10:45:06 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.404 [2024-11-20 10:45:07.032625] bdev_nvme.c:7460:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:27:26.404 [2024-11-20 10:45:07.032647] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@124 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.404 [2024-11-20 10:45:07.037243] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.404 [2024-11-20 10:45:07.037261] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.404 [2024-11-20 10:45:07.037271] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.404 [2024-11-20 10:45:07.037282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.404 [2024-11-20 10:45:07.037289] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.404 [2024-11-20 10:45:07.037295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.404 [2024-11-20 10:45:07.037302] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:26.404 [2024-11-20 10:45:07.037309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:26.404 [2024-11-20 10:45:07.037315] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:26.404 10:45:07 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:26.404 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:26.405 [2024-11-20 10:45:07.047254] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.405 [2024-11-20 10:45:07.057289] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.405 [2024-11-20 10:45:07.057303] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.405 [2024-11-20 10:45:07.057310] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.057319] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.405 [2024-11-20 10:45:07.057338] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:26.405 [2024-11-20 10:45:07.057606] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-11-20 10:45:07.057620] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.405 [2024-11-20 10:45:07.057629] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.405 [2024-11-20 10:45:07.057641] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 [2024-11-20 10:45:07.057657] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.405 [2024-11-20 10:45:07.057665] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.405 [2024-11-20 10:45:07.057673] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.405 [2024-11-20 10:45:07.057684] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.405 [2024-11-20 10:45:07.057689] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.405 [2024-11-20 10:45:07.057693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.405 [2024-11-20 10:45:07.067369] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.405 [2024-11-20 10:45:07.067382] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:26.405 [2024-11-20 10:45:07.067387] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.067391] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.405 [2024-11-20 10:45:07.067405] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.067525] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-11-20 10:45:07.067539] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.405 [2024-11-20 10:45:07.067547] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.405 [2024-11-20 10:45:07.067558] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 [2024-11-20 10:45:07.067568] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.405 [2024-11-20 10:45:07.067575] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.405 [2024-11-20 10:45:07.067582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.405 [2024-11-20 10:45:07.067588] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.405 [2024-11-20 10:45:07.067592] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.405 [2024-11-20 10:45:07.067596] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.405 [2024-11-20 10:45:07.077437] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.405 [2024-11-20 10:45:07.077449] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.405 [2024-11-20 10:45:07.077454] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.077458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.405 [2024-11-20 10:45:07.077471] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.077624] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-11-20 10:45:07.077637] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.405 [2024-11-20 10:45:07.077644] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.405 [2024-11-20 10:45:07.077654] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 [2024-11-20 10:45:07.077663] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.405 [2024-11-20 10:45:07.077669] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.405 [2024-11-20 10:45:07.077679] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.405 [2024-11-20 10:45:07.077685] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:26.405 [2024-11-20 10:45:07.077689] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.405 [2024-11-20 10:45:07.077693] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@125 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.405 [2024-11-20 10:45:07.087502] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.405 [2024-11-20 10:45:07.087515] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.405 [2024-11-20 10:45:07.087519] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.087523] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.405 [2024-11-20 10:45:07.087535] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.405 [2024-11-20 10:45:07.087664] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-11-20 10:45:07.087678] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.405 [2024-11-20 10:45:07.087685] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.405 [2024-11-20 10:45:07.087695] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 [2024-11-20 10:45:07.087704] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.405 [2024-11-20 10:45:07.087710] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.405 [2024-11-20 10:45:07.087716] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.405 [2024-11-20 10:45:07.087721] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.405 [2024-11-20 10:45:07.087726] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.405 [2024-11-20 10:45:07.087729] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.405 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.405 [2024-11-20 10:45:07.097567] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.405 [2024-11-20 10:45:07.097582] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.405 [2024-11-20 10:45:07.097586] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.405 [2024-11-20 10:45:07.097590] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.405 [2024-11-20 10:45:07.097604] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:26.405 [2024-11-20 10:45:07.097807] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.405 [2024-11-20 10:45:07.097820] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.405 [2024-11-20 10:45:07.097828] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.405 [2024-11-20 10:45:07.097838] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.405 [2024-11-20 10:45:07.097864] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.405 [2024-11-20 10:45:07.097871] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.405 [2024-11-20 10:45:07.097878] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.405 [2024-11-20 10:45:07.097884] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.405 [2024-11-20 10:45:07.097889] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.405 [2024-11-20 10:45:07.097893] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.406 [2024-11-20 10:45:07.107635] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.406 [2024-11-20 10:45:07.107646] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 
00:27:26.406 [2024-11-20 10:45:07.107650] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.406 [2024-11-20 10:45:07.107654] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.406 [2024-11-20 10:45:07.107668] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.406 [2024-11-20 10:45:07.107905] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.406 [2024-11-20 10:45:07.107917] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.406 [2024-11-20 10:45:07.107924] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.406 [2024-11-20 10:45:07.107934] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.406 [2024-11-20 10:45:07.107943] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.406 [2024-11-20 10:45:07.107950] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.406 [2024-11-20 10:45:07.107957] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.406 [2024-11-20 10:45:07.107962] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.406 [2024-11-20 10:45:07.107966] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.406 [2024-11-20 10:45:07.107973] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.406 [2024-11-20 10:45:07.117698] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.406 [2024-11-20 10:45:07.117711] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.406 [2024-11-20 10:45:07.117715] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.406 [2024-11-20 10:45:07.117719] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.406 [2024-11-20 10:45:07.117732] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.406 [2024-11-20 10:45:07.117958] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.406 [2024-11-20 10:45:07.117971] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.406 [2024-11-20 10:45:07.117978] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.406 [2024-11-20 10:45:07.117989] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.406 [2024-11-20 10:45:07.118004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.406 [2024-11-20 10:45:07.118011] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.406 [2024-11-20 10:45:07.118017] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.406 [2024-11-20 10:45:07.118023] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 
00:27:26.406 [2024-11-20 10:45:07.118027] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.406 [2024-11-20 10:45:07.118031] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.406 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.406 [2024-11-20 10:45:07.127762] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.406 [2024-11-20 10:45:07.127773] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.406 [2024-11-20 10:45:07.127777] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.406 [2024-11-20 10:45:07.127781] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.406 [2024-11-20 10:45:07.127793] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:26.406 [2024-11-20 10:45:07.127886] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.406 [2024-11-20 10:45:07.127897] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.406 [2024-11-20 10:45:07.127904] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.406 [2024-11-20 10:45:07.127915] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.406 [2024-11-20 10:45:07.127926] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.406 [2024-11-20 10:45:07.127934] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.406 [2024-11-20 10:45:07.127941] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.406 [2024-11-20 10:45:07.127955] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.406 [2024-11-20 10:45:07.127960] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.406 [2024-11-20 10:45:07.127963] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.406 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@126 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:27:26.665 [2024-11-20 10:45:07.137824] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 
00:27:26.665 [2024-11-20 10:45:07.137835] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.665 [2024-11-20 10:45:07.137839] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.665 [2024-11-20 10:45:07.137843] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.665 [2024-11-20 10:45:07.137854] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.665 [2024-11-20 10:45:07.137965] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-11-20 10:45:07.137978] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.665 [2024-11-20 10:45:07.137985] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.665 [2024-11-20 10:45:07.137995] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.665 [2024-11-20 10:45:07.138004] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.665 [2024-11-20 10:45:07.138010] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.665 [2024-11-20 10:45:07.138016] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.665 [2024-11-20 10:45:07.138022] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.665 [2024-11-20 10:45:07.138026] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 
00:27:26.665 [2024-11-20 10:45:07.138032] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.665 [2024-11-20 10:45:07.147886] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.665 [2024-11-20 10:45:07.147898] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.665 [2024-11-20 10:45:07.147903] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.665 [2024-11-20 10:45:07.147907] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.665 [2024-11-20 10:45:07.147919] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:26.665 [2024-11-20 10:45:07.148076] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-11-20 10:45:07.148088] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.665 [2024-11-20 10:45:07.148095] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.665 [2024-11-20 10:45:07.148106] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.665 [2024-11-20 10:45:07.148115] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.665 [2024-11-20 10:45:07.148121] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.665 [2024-11-20 10:45:07.148127] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:26.665 [2024-11-20 10:45:07.148132] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.665 [2024-11-20 10:45:07.148137] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.665 [2024-11-20 10:45:07.148140] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:26.665 [2024-11-20 10:45:07.157950] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:26.665 [2024-11-20 10:45:07.157961] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:26.665 [2024-11-20 10:45:07.157965] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:26.665 [2024-11-20 10:45:07.157969] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:26.665 [2024-11-20 10:45:07.157981] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:26.665 [2024-11-20 10:45:07.158131] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.665 [2024-11-20 10:45:07.158142] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x15d2390 with addr=10.0.0.2, port=4420 00:27:26.665 [2024-11-20 10:45:07.158149] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x15d2390 is same with the state(6) to be set 00:27:26.665 [2024-11-20 10:45:07.158158] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x15d2390 (9): Bad file descriptor 00:27:26.665 [2024-11-20 10:45:07.158172] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:26.665 [2024-11-20 10:45:07.158178] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:26.665 [2024-11-20 10:45:07.158184] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:26.665 [2024-11-20 10:45:07.158190] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:26.665 [2024-11-20 10:45:07.158197] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:26.665 [2024-11-20 10:45:07.158207] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 
00:27:26.665 [2024-11-20 10:45:07.158598] bdev_nvme.c:7265:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:27:26.665 [2024-11-20 10:45:07.158611] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4420 4421 == \4\4\2\1 ]] 00:27:26.665 10:45:07 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@924 -- # sleep 1 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_paths nvme0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # sort -n 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@58 -- # xargs 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ 4421 == \4\4\2\1 ]] 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@922 -- # return 0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # is_notification_count_eq 0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. 
| length' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=2 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.600 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:27:27.601 10:45:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_subsystem_names 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # jq -r '.[].name' 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # sort 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@54 -- # xargs 00:27:27.601 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_bdev_list 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:27.859 
10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # [[ '' == '' ]] 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@133 -- # is_notification_count_eq 2 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # expected_count=2 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@919 -- # local max=10 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@920 -- # (( max-- )) 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # get_notification_count 00:27:27.859 10:45:08 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # jq '. | length' 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@69 -- # notification_count=2 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@70 -- # notify_id=4 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@921 -- # (( notification_count == expected_count )) 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@922 -- # return 0 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:27.859 10:45:08 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:28.793 [2024-11-20 10:45:09.496663] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:28.793 [2024-11-20 10:45:09.496679] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:28.793 [2024-11-20 10:45:09.496690] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:29.050 [2024-11-20 10:45:09.584954] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] 
NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:27:29.309 [2024-11-20 10:45:09.896392] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] ctrlr was created to 10.0.0.2:4421 00:27:29.309 [2024-11-20 10:45:09.897031] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] Connecting qpair 0x15cfba0:1 started. 00:27:29.309 [2024-11-20 10:45:09.898581] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:29.309 [2024-11-20 10:45:09.898605] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s 
/tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.309 [2024-11-20 10:45:09.905516] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 3] qpair 0x15cfba0 was disconnected and freed. delete nvme_qpair. 00:27:29.309 request: 00:27:29.309 { 00:27:29.309 "name": "nvme", 00:27:29.309 "trtype": "tcp", 00:27:29.309 "traddr": "10.0.0.2", 00:27:29.309 "adrfam": "ipv4", 00:27:29.309 "trsvcid": "8009", 00:27:29.309 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:29.309 "wait_for_attach": true, 00:27:29.309 "method": "bdev_nvme_start_discovery", 00:27:29.309 "req_id": 1 00:27:29.309 } 00:27:29.309 Got JSON-RPC error response 00:27:29.309 response: 00:27:29.309 { 00:27:29.309 "code": -17, 00:27:29.309 "message": "File exists" 00:27:29.309 } 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # get_discovery_ctrlrs 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:27:29.309 
10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@140 -- # [[ nvme == \n\v\m\e ]] 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # get_bdev_list 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # xargs 00:27:29.309 10:45:09 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@144 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:29.309 10:45:10 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.309 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.309 request: 00:27:29.309 { 00:27:29.309 "name": "nvme_second", 00:27:29.309 "trtype": "tcp", 00:27:29.309 "traddr": "10.0.0.2", 00:27:29.309 "adrfam": "ipv4", 00:27:29.310 "trsvcid": "8009", 00:27:29.310 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:29.310 "wait_for_attach": true, 00:27:29.310 "method": "bdev_nvme_start_discovery", 00:27:29.310 "req_id": 1 00:27:29.310 } 00:27:29.310 Got JSON-RPC error response 00:27:29.310 response: 00:27:29.310 { 00:27:29.310 "code": -17, 00:27:29.310 "message": "File exists" 00:27:29.310 } 00:27:29.310 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:27:29.310 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:29.310 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:29.310 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:29.310 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_discovery_ctrlrs 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme == \n\v\m\e ]] 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # get_bdev_list 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # jq -r '.[].name' 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # sort 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- 
host/discovery.sh@50 -- # xargs 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@147 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@150 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@652 -- # local es=0 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:29.568 10:45:10 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:30.501 [2024-11-20 10:45:11.138038] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:30.501 [2024-11-20 10:45:11.138065] 
nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1612180 with addr=10.0.0.2, port=8010 00:27:30.501 [2024-11-20 10:45:11.138083] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:30.501 [2024-11-20 10:45:11.138090] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:30.501 [2024-11-20 10:45:11.138096] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:31.474 [2024-11-20 10:45:12.140472] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:31.474 [2024-11-20 10:45:12.140497] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1612180 with addr=10.0.0.2, port=8010 00:27:31.474 [2024-11-20 10:45:12.140509] nvme_tcp.c:2612:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:31.474 [2024-11-20 10:45:12.140516] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:31.474 [2024-11-20 10:45:12.140522] bdev_nvme.c:7546:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:27:32.513 [2024-11-20 10:45:13.142641] bdev_nvme.c:7521:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:27:32.513 request: 00:27:32.513 { 00:27:32.513 "name": "nvme_second", 00:27:32.513 "trtype": "tcp", 00:27:32.513 "traddr": "10.0.0.2", 00:27:32.513 "adrfam": "ipv4", 00:27:32.513 "trsvcid": "8010", 00:27:32.513 "hostnqn": "nqn.2021-12.io.spdk:test", 00:27:32.513 "wait_for_attach": false, 00:27:32.513 "attach_timeout_ms": 3000, 00:27:32.513 "method": "bdev_nvme_start_discovery", 00:27:32.513 "req_id": 1 00:27:32.513 } 00:27:32.513 Got JSON-RPC error response 00:27:32.513 response: 00:27:32.513 { 00:27:32.513 "code": -110, 00:27:32.513 "message": "Connection timed out" 00:27:32.513 } 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # 
[[ 1 == 0 ]] 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@655 -- # es=1 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_discovery_ctrlrs 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # jq -r '.[].name' 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # sort 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@62 -- # xargs 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme == \n\v\m\e ]] 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@154 -- # trap - SIGINT SIGTERM EXIT 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@156 -- # kill 3377580 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # nvmftestfini 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # nvmfcleanup 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@99 -- # sync 00:27:32.513 10:45:13 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@102 -- # set +e 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@103 -- # for i in {1..20} 00:27:32.513 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:27:32.513 rmmod nvme_tcp 00:27:32.513 rmmod nvme_fabrics 00:27:32.772 rmmod nvme_keyring 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@106 -- # set -e 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@107 -- # return 0 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@336 -- # '[' -n 3377438 ']' 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@337 -- # killprocess 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # '[' -z 3377438 ']' 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # kill -0 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # uname 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3377438' 
00:27:32.772 killing process with pid 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@973 -- # kill 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@978 -- # wait 3377438 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # nvmf_fini 00:27:32.772 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@264 -- # local dev 00:27:32.773 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@267 -- # remove_target_ns 00:27:32.773 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:32.773 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:32.773 10:45:13 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@268 -- # delete_main_bridge 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@130 -- # return 0 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:27:35.307 10:45:15 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # _dev=0 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@41 -- # dev_map=() 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/setup.sh@284 -- # iptr 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-save 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@542 -- # iptables-restore 00:27:35.307 00:27:35.307 real 0m18.723s 00:27:35.307 user 0m23.252s 00:27:35.307 sys 
0m6.031s 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:27:35.307 ************************************ 00:27:35.307 END TEST nvmf_host_discovery 00:27:35.307 ************************************ 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:35.307 ************************************ 00:27:35.307 START TEST nvmf_discovery_remove_ifc 00:27:35.307 ************************************ 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:27:35.307 * Looking for test storage... 
00:27:35.307 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lcov --version 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # IFS=.-: 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@336 -- # read -ra ver1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # IFS=.-: 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@337 -- # read -ra ver2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@338 -- # local 'op=<' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@340 -- # ver1_l=2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@341 -- # ver2_l=1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@344 -- # case "$op" in 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
scripts/common.sh@345 -- # : 1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v = 0 )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # decimal 1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@365 -- # ver1[v]=1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # decimal 2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@353 -- # local d=2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@355 -- # echo 2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@366 -- # ver2[v]=2 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@368 -- # return 0 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # 
export 'LCOV_OPTS= 00:27:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.307 --rc genhtml_branch_coverage=1 00:27:35.307 --rc genhtml_function_coverage=1 00:27:35.307 --rc genhtml_legend=1 00:27:35.307 --rc geninfo_all_blocks=1 00:27:35.307 --rc geninfo_unexecuted_blocks=1 00:27:35.307 00:27:35.307 ' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:27:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.307 --rc genhtml_branch_coverage=1 00:27:35.307 --rc genhtml_function_coverage=1 00:27:35.307 --rc genhtml_legend=1 00:27:35.307 --rc geninfo_all_blocks=1 00:27:35.307 --rc geninfo_unexecuted_blocks=1 00:27:35.307 00:27:35.307 ' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:27:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.307 --rc genhtml_branch_coverage=1 00:27:35.307 --rc genhtml_function_coverage=1 00:27:35.307 --rc genhtml_legend=1 00:27:35.307 --rc geninfo_all_blocks=1 00:27:35.307 --rc geninfo_unexecuted_blocks=1 00:27:35.307 00:27:35.307 ' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:27:35.307 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:27:35.307 --rc genhtml_branch_coverage=1 00:27:35.307 --rc genhtml_function_coverage=1 00:27:35.307 --rc genhtml_legend=1 00:27:35.307 --rc geninfo_all_blocks=1 00:27:35.307 --rc geninfo_unexecuted_blocks=1 00:27:35.307 00:27:35.307 ' 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux 
== FreeBSD ]] 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:27:35.307 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@15 -- # shopt -s extglob 00:27:35.308 10:45:15 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc 
-- nvmf/common.sh@50 -- # : 0 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:27:35.308 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@54 -- # have_pci_nics=0 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # discovery_port=8009 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@15 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@18 -- # nqn=nqn.2016-06.io.spdk:cnode 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # host_nqn=nqn.2021-12.io.spdk:test 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@21 -- # host_sock=/tmp/host.sock 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # 
nvmftestinit 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # prepare_net_devs 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # local -g is_hw=no 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # remove_target_ns 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # xtrace_disable 00:27:35.308 10:45:15 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # pci_devs=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@131 -- # local -a pci_devs 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@132 -- # pci_net_devs=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@132 -- # local -a pci_net_devs 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # pci_drivers=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@133 -- # local -A pci_drivers 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # net_devs=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@135 -- # local -ga net_devs 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # e810=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@136 -- # local -ga e810 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # x722=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@137 -- # local -ga x722 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # mlx=() 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@138 -- # local -ga mlx 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:41.874 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:41.874 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:41.874 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp 
== tcp ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:41.874 Found net devices under 0000:86:00.0: cvl_0_0 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:41.874 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # [[ up == up ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:41.875 Found net devices under 0000:86:00.1: cvl_0_1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # 
net_devs+=("${pci_net_devs[@]}") 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@262 -- # is_hw=yes 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@257 -- # create_target_ns 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@27 -- # local -gA dev_map 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@28 -- # local -g _dev 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # ips=() 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
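The ip_pool value above (0x0a000001) packs the whole 10.0.0.1 address into one 32-bit integer; the val_to_ip calls later in this trace unpack it with printf '%u.%u.%u.%u'. A minimal standalone sketch of that conversion (equivalent in effect to the nvmf/setup.sh helper; the exact in-tree implementation is assumed):

```shell
# Unpack a 32-bit integer into a dotted-quad IPv4 address, as the
# val_to_ip helper in this trace does (0x0a000001 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

Incrementing the integer by one yields the next host address, which is why the pair loop can hand out initiator/target addresses with simple arithmetic.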
00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772161 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:27:41.875 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:27:41.875 10.0.0.1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@11 -- # local val=167772162 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@208 -- # ip 
netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:27:41.875 10.0.0.2 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@78 -- # [[ phy == veth 
]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@38 -- # ping_ips 1 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:41.875 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval 'ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:27:41.876 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:27:41.876 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.331 ms 00:27:41.876 00:27:41.876 --- 10.0.0.1 ping statistics --- 00:27:41.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.876 rtt min/avg/max/mdev = 0.331/0.331/0.331/0.000 ms 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:41.876 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:27:41.876 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:27:41.876 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:27:41.876 00:27:41.876 --- 10.0.0.2 ping statistics --- 00:27:41.876 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:41.876 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair++ )) 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # return 0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local 
dev=initiator0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=initiator1 00:27:41.876 
10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target0 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:27:41.876 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # get_net_dev target1 00:27:41.876 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@107 -- # local dev=target1 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:27:41.877 10:45:21 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@109 -- # return 1 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@168 -- # dev= 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@169 -- # return 0 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@35 -- # nvmfappstart -m 0x2 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@726 -- # xtrace_disable 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@328 -- # nvmfpid=3383243 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 
-m 0x2 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # waitforlisten 3383243 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3383243 ']' 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:41.877 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.877 10:45:21 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 [2024-11-20 10:45:21.996607] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:27:41.877 [2024-11-20 10:45:21.996658] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:41.877 [2024-11-20 10:45:22.075389] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.877 [2024-11-20 10:45:22.115961] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:41.877 [2024-11-20 10:45:22.115994] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:27:41.877 [2024-11-20 10:45:22.116001] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:41.877 [2024-11-20 10:45:22.116008] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:41.877 [2024-11-20 10:45:22.116013] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:41.877 [2024-11-20 10:45:22.116577] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@732 -- # xtrace_disable 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@38 -- # rpc_cmd 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 [2024-11-20 10:45:22.263042] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:41.877 [2024-11-20 10:45:22.271225] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:27:41.877 null0 00:27:41.877 [2024-11-20 10:45:22.303217] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 
4420 *** 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@54 -- # hostpid=3383330 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@55 -- # waitforlisten 3383330 /tmp/host.sock 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@835 -- # '[' -z 3383330 ']' 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@839 -- # local rpc_addr=/tmp/host.sock 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@840 -- # local max_retries=100 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:27:41.877 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@844 -- # xtrace_disable 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 [2024-11-20 10:45:22.369693] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:27:41.877 [2024-11-20 10:45:22.369730] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3383330 ] 00:27:41.877 [2024-11-20 10:45:22.441207] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:41.877 [2024-11-20 10:45:22.483791] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@868 -- # return 0 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@57 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@61 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:41.877 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:42.134 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:42.134 10:45:22 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@64 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:27:42.134 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:42.134 10:45:22 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.067 [2024-11-20 10:45:23.669328] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:43.067 [2024-11-20 10:45:23.669351] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:43.067 [2024-11-20 10:45:23.669368] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:43.067 [2024-11-20 10:45:23.757636] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:27:43.325 [2024-11-20 10:45:23.942693] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr was created to 10.0.0.2:4420 00:27:43.325 [2024-11-20 10:45:23.943460] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Connecting qpair 0x15399f0:1 started. 
00:27:43.325 [2024-11-20 10:45:23.944764] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:27:43.325 [2024-11-20 10:45:23.944802] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:27:43.325 [2024-11-20 10:45:23.944819] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:27:43.325 [2024-11-20 10:45:23.944831] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:27:43.325 [2024-11-20 10:45:23.944848] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@67 -- # wait_for_bdev nvme0n1 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.325 [2024-11-20 10:45:23.989798] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: 
[nqn.2016-06.io.spdk:cnode0, 1] qpair 0x15399f0 was disconnected and freed. delete nvme_qpair. 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:27:43.325 10:45:23 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@70 -- # ip netns exec nvmf_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_1 00:27:43.325 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@71 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 down 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@74 -- # wait_for_bdev '' 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:43.582 10:45:24 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:44.515 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 
-- # get_bdev_list 00:27:44.515 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:44.516 10:45:25 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:45.886 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:45.886 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:45.886 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:45.886 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:45.886 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:45.887 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:45.887 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:45.887 10:45:26 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:45.887 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:45.887 10:45:26 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:46.819 10:45:27 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:47.752 10:45:28 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:47.752 10:45:28 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:48.684 [2024-11-20 10:45:29.386378] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:27:48.684 
[2024-11-20 10:45:29.386411] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.684 [2024-11-20 10:45:29.386422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.684 [2024-11-20 10:45:29.386430] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.684 [2024-11-20 10:45:29.386441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.684 [2024-11-20 10:45:29.386448] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.684 [2024-11-20 10:45:29.386455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.684 [2024-11-20 10:45:29.386461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.684 [2024-11-20 10:45:29.386467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.684 [2024-11-20 10:45:29.386474] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:48.684 [2024-11-20 10:45:29.386481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:48.684 [2024-11-20 10:45:29.386487] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516220 is same with the state(6) to be set 00:27:48.684 [2024-11-20 10:45:29.396402] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to 
flush tqpair=0x1516220 (9): Bad file descriptor 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:48.684 10:45:29 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:48.684 [2024-11-20 10:45:29.406434] bdev_nvme.c:2545:bdev_nvme_reset_destroy_qpairs: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Delete qpairs for reset. 00:27:48.684 [2024-11-20 10:45:29.406447] bdev_nvme.c:2533:bdev_nvme_reset_destroy_qpair_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] qpairs were deleted. 00:27:48.684 [2024-11-20 10:45:29.406451] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:48.684 [2024-11-20 10:45:29.406456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:48.684 [2024-11-20 10:45:29.406473] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 
00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:50.057 [2024-11-20 10:45:30.439310] posix.c:1054:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:27:50.057 [2024-11-20 10:45:30.439393] nvme_tcp.c:2288:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1516220 with addr=10.0.0.2, port=4420 00:27:50.057 [2024-11-20 10:45:30.439426] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1516220 is same with the state(6) to be set 00:27:50.057 [2024-11-20 10:45:30.439478] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1516220 (9): Bad file descriptor 00:27:50.057 [2024-11-20 10:45:30.440430] bdev_nvme.c:3168:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] Unable to perform failover, already in progress. 
00:27:50.057 [2024-11-20 10:45:30.440493] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.057 [2024-11-20 10:45:30.440516] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.057 [2024-11-20 10:45:30.440549] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:27:50.057 [2024-11-20 10:45:30.440569] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:50.057 [2024-11-20 10:45:30.440586] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:50.057 [2024-11-20 10:45:30.440599] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.057 [2024-11-20 10:45:30.440621] bdev_nvme.c:2129:nvme_ctrlr_disconnect: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start disconnecting ctrlr. 00:27:50.057 [2024-11-20 10:45:30.440634] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme0n1 != '' ]] 00:27:50.057 10:45:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:50.990 [2024-11-20 10:45:31.443150] bdev_nvme.c:2517:bdev_nvme_reconnect_ctrlr: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Start reconnecting ctrlr. 00:27:50.990 [2024-11-20 10:45:31.443172] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 
00:27:50.990 [2024-11-20 10:45:31.443182] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:27:50.990 [2024-11-20 10:45:31.443189] nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:27:50.990 [2024-11-20 10:45:31.443196] nvme_ctrlr.c:1098:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] already in failed state 00:27:50.990 [2024-11-20 10:45:31.443207] bdev_nvme.c:2507:bdev_nvme_reconnect_ctrlr_poll: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] ctrlr could not be connected. 00:27:50.990 [2024-11-20 10:45:31.443212] bdev_nvme.c:2274:bdev_nvme_reset_ctrlr_complete: *INFO*: [nqn.2016-06.io.spdk:cnode0, 1] Clear pending resets. 00:27:50.990 [2024-11-20 10:45:31.443215] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:27:50.990 [2024-11-20 10:45:31.443234] bdev_nvme.c:7229:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:27:50.990 [2024-11-20 10:45:31.443256] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.990 [2024-11-20 10:45:31.443266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.990 [2024-11-20 10:45:31.443275] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.990 [2024-11-20 10:45:31.443283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.990 [2024-11-20 10:45:31.443290] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 
00:27:50.990 [2024-11-20 10:45:31.443297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.990 [2024-11-20 10:45:31.443304] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.990 [2024-11-20 10:45:31.443310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.990 [2024-11-20 10:45:31.443317] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:27:50.990 [2024-11-20 10:45:31.443324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:50.990 [2024-11-20 10:45:31.443334] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] in failed state. 00:27:50.991 [2024-11-20 10:45:31.443715] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1505900 (9): Bad file descriptor 00:27:50.991 [2024-11-20 10:45:31.444725] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:27:50.991 [2024-11-20 10:45:31.444735] nvme_ctrlr.c:1217:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery, 1] Failed to read the CC register 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 
00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != '' ]] 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@77 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@78 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@81 -- # wait_for_bdev nvme1n1 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:50.991 10:45:31 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:51.923 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:52.179 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:52.179 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:52.179 10:45:32 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:52.745 [2024-11-20 10:45:33.454155] bdev_nvme.c:7478:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:27:52.745 [2024-11-20 10:45:33.454171] bdev_nvme.c:7564:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:27:52.745 [2024-11-20 10:45:33.454182] bdev_nvme.c:7441:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:27:53.003 [2024-11-20 10:45:33.581602] bdev_nvme.c:7407:discovery_log_page_cb: *INFO*: 
Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name' 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs 00:27:53.003 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:27:53.260 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:27:53.260 10:45:33 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sleep 1 00:27:53.260 [2024-11-20 10:45:33.798731] bdev_nvme.c:5634:nvme_ctrlr_create_done: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] ctrlr was created to 10.0.0.2:4420 00:27:53.260 [2024-11-20 10:45:33.799371] bdev_nvme.c:1985:bdev_nvme_create_qpair: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] Connecting qpair 0x1515080:1 started. 
00:27:53.260 [2024-11-20 10:45:33.800403] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0
00:27:53.260 [2024-11-20 10:45:33.800433] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0
00:27:53.260 [2024-11-20 10:45:33.800448] bdev_nvme.c:8274:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0
00:27:53.260 [2024-11-20 10:45:33.800460] bdev_nvme.c:7297:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done
00:27:53.260 [2024-11-20 10:45:33.800467] bdev_nvme.c:7256:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again
00:27:53.260 [2024-11-20 10:45:33.803988] bdev_nvme.c:1791:bdev_nvme_disconnected_qpair_cb: *INFO*: [nqn.2016-06.io.spdk:cnode0, 2] qpair 0x1515080 was disconnected and freed. delete nvme_qpair.
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # get_bdev_list
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # jq -r '.[].name'
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@563 -- # xtrace_disable
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # sort
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@24 -- # xargs
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@28 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]]
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # trap - SIGINT SIGTERM EXIT
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@85 -- # killprocess 3383330
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3383330 ']'
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3383330
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383330
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:27:54.191 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383330'
killing process with pid 3383330
10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3383330
10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3383330
00:27:54.449 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # nvmftestfini
00:27:54.449 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # nvmfcleanup
00:27:54.449 10:45:34 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@99 -- # sync
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@101 -- # '[' tcp == tcp ']'
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@102 -- # set +e
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@103 -- # for i in {1..20}
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp
rmmod nvme_tcp
rmmod nvme_fabrics
rmmod nvme_keyring
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@106 -- # set -e
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@107 -- # return 0
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@336 -- # '[' -n 3383243 ']'
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@337 -- # killprocess 3383243
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # '[' -z 3383243 ']'
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # kill -0 3383243
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # uname
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3383243
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3383243'
killing process with pid 3383243
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@973 -- # kill 3383243
00:27:54.449 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@978 -- # wait 3383243
00:27:54.707 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@339 -- # '[' '' == iso ']'
00:27:54.707 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # nvmf_fini
00:27:54.708 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@264 -- # local dev
00:27:54.708 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@267 -- # remove_target_ns
00:27:54.708 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:27:54.708 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:27:54.708 10:45:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_target_ns
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@268 -- # delete_main_bridge
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]]
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@130 -- # return 0
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]]
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0
00:27:56.609 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns=
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0'
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}"
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]]
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@275 -- # (( 4 == 3 ))
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns=
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@222 -- # [[ -n '' ]]
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1'
00:27:56.610 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@283 -- # reset_setup_interfaces
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # _dev=0
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@41 -- # dev_map=()
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/setup.sh@284 -- # iptr
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-save
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@542 -- # iptables-restore
00:27:56.868
00:27:56.868 real 0m21.711s
00:27:56.868 user 0m26.913s
00:27:56.868 sys 0m6.004s
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x
00:27:56.868 ************************************
00:27:56.868 END TEST nvmf_discovery_remove_ifc
00:27:56.868 ************************************
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@34 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1111 -- # xtrace_disable
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x
00:27:56.868 ************************************
00:27:56.868 START TEST nvmf_multicontroller
00:27:56.868 ************************************
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp
00:27:56.868 * Looking for test storage...
00:27:56.868 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1692 -- # [[ y == y ]]
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lcov --version
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # awk '{print $NF}'
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1693 -- # lt 1.15 2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@333 -- # local ver1 ver1_l
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@334 -- # local ver2 ver2_l
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # IFS=.-:
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@336 -- # read -ra ver1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # IFS=.-:
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@337 -- # read -ra ver2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@338 -- # local 'op=<'
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@340 -- # ver1_l=2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@341 -- # ver2_l=1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@344 -- # case "$op" in
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@345 -- # : 1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v = 0 ))
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # decimal 1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@365 -- # ver1[v]=1
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # decimal 2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@353 -- # local d=2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@355 -- # echo 2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@366 -- # ver2[v]=2
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@368 -- # return 0
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:27:56.868 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS=
00:27:56.868 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:56.868 --rc genhtml_branch_coverage=1
00:27:56.868 --rc genhtml_function_coverage=1
00:27:56.868 --rc genhtml_legend=1
00:27:56.868 --rc geninfo_all_blocks=1
00:27:56.868 --rc geninfo_unexecuted_blocks=1
00:27:56.868
00:27:56.868 '
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1706 -- # LCOV_OPTS='
00:27:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:57.127 --rc genhtml_branch_coverage=1
00:27:57.127 --rc genhtml_function_coverage=1
00:27:57.127 --rc genhtml_legend=1
00:27:57.127 --rc geninfo_all_blocks=1
00:27:57.127 --rc geninfo_unexecuted_blocks=1
00:27:57.127
00:27:57.127 '
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov
00:27:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:57.127 --rc genhtml_branch_coverage=1
00:27:57.127 --rc genhtml_function_coverage=1
00:27:57.127 --rc genhtml_legend=1
00:27:57.127 --rc geninfo_all_blocks=1
00:27:57.127 --rc geninfo_unexecuted_blocks=1
00:27:57.127
00:27:57.127 '
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1707 -- # LCOV='lcov
00:27:57.127 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:27:57.127 --rc genhtml_branch_coverage=1
00:27:57.127 --rc genhtml_function_coverage=1
00:27:57.127 --rc genhtml_legend=1
00:27:57.127 --rc geninfo_all_blocks=1
00:27:57.127 --rc geninfo_unexecuted_blocks=1
00:27:57.127
00:27:57.127 '
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]]
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS=
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # nvme gen-hostnqn
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect'
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NET_TYPE=phy
00:27:57.127 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@15 -- # shopt -s extglob
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]]
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=()
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@50 -- # : 0
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@52 -- # build_nvmf_app_args
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']'
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF)
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}")
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']'
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' -n '' ']'
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']'
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@54 -- # have_pci_nics=0
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # nvmftestinit
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # '[' -z tcp ']'
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # prepare_net_devs
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # local -g is_hw=no
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # remove_target_ns
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null'
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # [[ phy != virt ]]
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # xtrace_disable
00:27:57.128 10:45:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # pci_devs=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@131 -- # local -a pci_devs
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # pci_net_devs=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@132 -- # local -a pci_net_devs
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # pci_drivers=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@133 -- # local -A pci_drivers
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # net_devs=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@135 -- # local -ga net_devs
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # e810=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@136 -- # local -ga e810
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # x722=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@137 -- # local -ga x722
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # mlx=()
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@138 -- # local -ga mlx
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]})
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}")
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@163 -- # [[ tcp == rdma ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@171 -- # [[ e810 == e810 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}")
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@177 -- # (( 2 == 0 ))
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)'
Found 0000:86:00.0 (0x8086 - 0x159b)
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)'
Found 0000:86:00.1 (0x8086 - 0x159b)
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@184 -- # [[ ice == unknown ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@188 -- # [[ ice == unbound ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@194 -- # [[ tcp == rdma ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@208 -- # (( 0 > 0 ))
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ e810 == e810 ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@214 -- # [[ tcp == rdma ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0'
Found net devices under 0000:86:00.0: cvl_0_0
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@232 -- # [[ tcp == tcp ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}"
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # [[ up == up ]]
00:28:03.694 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@238 -- # (( 1 == 0 ))
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}")
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1'
Found net devices under 0000:86:00.1: cvl_0_1
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}")
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # (( 2 == 0 ))
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@252 -- # [[ tcp == rdma ]]
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@262 -- # is_hw=yes
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # [[ yes == yes ]]
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@265 -- # [[ tcp == tcp ]]
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@266 -- # nvmf_tcp_init
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@257 -- # create_target_ns
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE")
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]]
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up'
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@27 -- # local -gA dev_map
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@28
-- # local -g _dev 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # ips=() 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:03.695 10:45:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772161 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:03.695 10.0.0.1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
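For reference, the `val_to_ip` step traced above turns the 32-bit pool value 167772161 (0x0a000001) into the dotted-quad 10.0.0.1. A minimal re-creation of that conversion, inferred from the traced `printf` call (the real helper in nvmf/setup.sh may differ in details):

```shell
# Sketch of the val_to_ip conversion seen in the trace: split a 32-bit
# integer into four octets (167772161 == 0x0a000001 == 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $((val >> 24 & 0xff)) $((val >> 16 & 0xff)) \
    $((val >> 8 & 0xff)) $((val & 0xff))
}

val_to_ip 167772161  # -> 10.0.0.1
val_to_ip 167772162  # -> 10.0.0.2
```

This also explains the per-pair pool stride of 2 in the `(( _dev++, ip_pool += 2 ))` step: each initiator/target pair consumes two consecutive addresses from the 10.0.0.0/24 pool.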
00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@11 -- # local val=167772162 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:03.695 10.0.0.2 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 
up' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:03.695 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # 
ip=10.0.0.1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:03.696 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
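The namespace plumbing traced up to this point can be summarized as the command sequence below. This is a dry-run reconstruction: the commands are printed rather than executed, since actually running them requires root plus the E810 ports (cvl_0_0/cvl_0_1) present on this host. The target-side port is moved into the nvmf_ns_spdk namespace while the initiator side stays in the root namespace.

```shell
# Dry-run summary of the create_target_ns / add_to_ns / set_ip / set_up
# steps from the trace above. Printed only; running them needs root and
# the physical interfaces from this test host.
ns=nvmf_ns_spdk
cmds=(
  "ip netns add $ns"
  "ip netns exec $ns ip link set lo up"
  "ip link set cvl_0_1 netns $ns"
  "ip addr add 10.0.0.1/24 dev cvl_0_0"
  "ip netns exec $ns ip addr add 10.0.0.2/24 dev cvl_0_1"
  "ip link set cvl_0_0 up"
  "ip netns exec $ns ip link set cvl_0_1 up"
  "iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT"
)
printf '%s\n' "${cmds[@]}"
```

The two pings that follow in the log (10.0.0.1 from inside the namespace, 10.0.0.2 from outside) verify exactly this wiring before the NVMe-oF target is started.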
00:28:03.696 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.337 ms 00:28:03.696 00:28:03.696 --- 10.0.0.1 ping statistics --- 00:28:03.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.696 rtt min/avg/max/mdev = 0.337/0.337/0.337/0.000 ms 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:03.696 10:45:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:03.696 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:03.696 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.190 ms 00:28:03.696 00:28:03.696 --- 10.0.0.2 ping statistics --- 00:28:03.696 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:03.696 rtt min/avg/max/mdev = 0.190/0.190/0.190/0.000 ms 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # return 0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=initiator1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.696 10:45:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:03.696 10:45:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@107 -- # local dev=target1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@109 -- # return 1 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@168 -- # dev= 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@169 -- # return 0 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:03.696 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@20 -- # nvmfappstart -m 0xE 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller 
-- common/autotest_common.sh@726 -- # xtrace_disable 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@328 -- # nvmfpid=3389026 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # waitforlisten 3389026 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3389026 ']' 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:03.697 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:03.697 10:45:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.697 [2024-11-20 10:45:43.786457] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:28:03.697 [2024-11-20 10:45:43.786501] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:03.697 [2024-11-20 10:45:43.864623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:03.697 [2024-11-20 10:45:43.905799] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:03.697 [2024-11-20 10:45:43.905838] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:03.697 [2024-11-20 10:45:43.905845] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:03.697 [2024-11-20 10:45:43.905851] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:03.697 [2024-11-20 10:45:43.905856] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:28:03.697 [2024-11-20 10:45:43.907250] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:03.697 [2024-11-20 10:45:43.907378] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:03.697 [2024-11-20 10:45:43.907379] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:03.955 [2024-11-20 10:45:44.659623] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:03.955 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 
00:28:04.213 Malloc0 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 [2024-11-20 10:45:44.727272] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:28:04.213 
10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 [2024-11-20 10:45:44.735206] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 Malloc1 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@32 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@39 -- # bdevperf_pid=3389141 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@42 -- # waitforlisten 3389141 /var/tmp/bdevperf.sock 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@835 -- # '[' -z 3389141 ']' 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 
00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:28:04.213 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:04.213 10:45:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@868 -- # return 0 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@45 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.471 NVMe0n1 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@49 -- # grep -c NVMe 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.471 10:45:45 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.471 1 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@55 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -q nqn.2021-09-7.io.spdk:00001 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.471 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.471 request: 00:28:04.471 { 00:28:04.471 "name": "NVMe0", 00:28:04.471 "trtype": "tcp", 00:28:04.471 "traddr": "10.0.0.2", 00:28:04.471 "adrfam": "ipv4", 00:28:04.471 "trsvcid": "4420", 00:28:04.471 "subnqn": 
"nqn.2016-06.io.spdk:cnode1", 00:28:04.471 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:28:04.471 "hostaddr": "10.0.0.1", 00:28:04.471 "prchk_reftag": false, 00:28:04.471 "prchk_guard": false, 00:28:04.471 "hdgst": false, 00:28:04.471 "ddgst": false, 00:28:04.471 "allow_unrecognized_csi": false, 00:28:04.471 "method": "bdev_nvme_attach_controller", 00:28:04.729 "req_id": 1 00:28:04.729 } 00:28:04.729 Got JSON-RPC error response 00:28:04.729 response: 00:28:04.729 { 00:28:04.729 "code": -114, 00:28:04.729 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:04.729 } 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.1 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 request: 00:28:04.729 { 00:28:04.729 "name": "NVMe0", 00:28:04.729 "trtype": "tcp", 00:28:04.729 "traddr": "10.0.0.2", 00:28:04.729 "adrfam": "ipv4", 00:28:04.729 "trsvcid": "4420", 00:28:04.729 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:04.729 "hostaddr": "10.0.0.1", 00:28:04.729 "prchk_reftag": false, 00:28:04.729 "prchk_guard": false, 00:28:04.729 "hdgst": false, 00:28:04.729 "ddgst": false, 00:28:04.729 "allow_unrecognized_csi": false, 00:28:04.729 "method": "bdev_nvme_attach_controller", 00:28:04.729 "req_id": 1 00:28:04.729 } 00:28:04.729 Got JSON-RPC error response 00:28:04.729 response: 00:28:04.729 { 00:28:04.729 "code": -114, 00:28:04.729 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:04.729 } 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 
00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@64 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x disable 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.729 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.729 request: 00:28:04.729 { 00:28:04.729 "name": "NVMe0", 00:28:04.729 "trtype": "tcp", 00:28:04.729 "traddr": "10.0.0.2", 00:28:04.729 "adrfam": "ipv4", 00:28:04.729 "trsvcid": "4420", 00:28:04.729 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.729 
"hostaddr": "10.0.0.1", 00:28:04.729 "prchk_reftag": false, 00:28:04.729 "prchk_guard": false, 00:28:04.730 "hdgst": false, 00:28:04.730 "ddgst": false, 00:28:04.730 "multipath": "disable", 00:28:04.730 "allow_unrecognized_csi": false, 00:28:04.730 "method": "bdev_nvme_attach_controller", 00:28:04.730 "req_id": 1 00:28:04.730 } 00:28:04.730 Got JSON-RPC error response 00:28:04.730 response: 00:28:04.730 { 00:28:04.730 "code": -114, 00:28:04.730 "message": "A controller named NVMe0 already exists and multipath is disabled" 00:28:04.730 } 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@652 -- # local es=0 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 -x failover 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.730 request: 00:28:04.730 { 00:28:04.730 "name": "NVMe0", 00:28:04.730 "trtype": "tcp", 00:28:04.730 "traddr": "10.0.0.2", 00:28:04.730 "adrfam": "ipv4", 00:28:04.730 "trsvcid": "4420", 00:28:04.730 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:04.730 "hostaddr": "10.0.0.1", 00:28:04.730 "prchk_reftag": false, 00:28:04.730 "prchk_guard": false, 00:28:04.730 "hdgst": false, 00:28:04.730 "ddgst": false, 00:28:04.730 "multipath": "failover", 00:28:04.730 "allow_unrecognized_csi": false, 00:28:04.730 "method": "bdev_nvme_attach_controller", 00:28:04.730 "req_id": 1 00:28:04.730 } 00:28:04.730 Got JSON-RPC error response 00:28:04.730 response: 00:28:04.730 { 00:28:04.730 "code": -114, 00:28:04.730 "message": "A controller named NVMe0 already exists with the specified network path" 00:28:04.730 } 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@655 -- # es=1 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:28:04.730 
10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.730 NVMe0n1 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@78 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.730 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@82 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.1 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.988 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # rpc_cmd -s /var/tmp/bdevperf.sock 
bdev_nvme_get_controllers 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # grep -c NVMe 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@85 -- # '[' 2 '!=' 2 ']' 00:28:04.988 10:45:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:28:06.360 { 00:28:06.360 "results": [ 00:28:06.360 { 00:28:06.360 "job": "NVMe0n1", 00:28:06.360 "core_mask": "0x1", 00:28:06.360 "workload": "write", 00:28:06.360 "status": "finished", 00:28:06.360 "queue_depth": 128, 00:28:06.360 "io_size": 4096, 00:28:06.360 "runtime": 1.004117, 00:28:06.360 "iops": 25236.10296409681, 00:28:06.360 "mibps": 98.57852720350317, 00:28:06.360 "io_failed": 0, 00:28:06.360 "io_timeout": 0, 00:28:06.360 "avg_latency_us": 5066.161623031534, 00:28:06.360 "min_latency_us": 2527.8171428571427, 00:28:06.360 "max_latency_us": 8800.548571428571 00:28:06.360 } 00:28:06.360 ], 00:28:06.360 "core_count": 1 00:28:06.360 } 00:28:06.360 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@93 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:28:06.360 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.360 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.360 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.361 10:45:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # [[ -n '' ]] 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@111 -- # killprocess 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3389141 ']' 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3389141' 00:28:06.361 killing process with pid 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3389141 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@113 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.361 10:45:46 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@114 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@116 -- # trap - SIGINT SIGTERM EXIT 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@118 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1598 -- # sort -u 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1600 -- # cat 00:28:06.361 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:06.361 [2024-11-20 10:45:44.839406] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:28:06.361 [2024-11-20 10:45:44.839462] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3389141 ] 00:28:06.361 [2024-11-20 10:45:44.915246] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:06.361 [2024-11-20 10:45:44.958621] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:28:06.361 [2024-11-20 10:45:45.530745] bdev.c:4700:bdev_name_add: *ERROR*: Bdev name f0375336-bea6-4492-b2bd-602322f6b053 already exists 00:28:06.361 [2024-11-20 10:45:45.530771] bdev.c:7838:bdev_register: *ERROR*: Unable to add uuid:f0375336-bea6-4492-b2bd-602322f6b053 alias for bdev NVMe1n1 00:28:06.361 [2024-11-20 10:45:45.530779] bdev_nvme.c:4658:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:28:06.361 Running I/O for 1 seconds... 00:28:06.361 25212.00 IOPS, 98.48 MiB/s 00:28:06.361 Latency(us) 00:28:06.361 [2024-11-20T09:45:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.361 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:28:06.361 NVMe0n1 : 1.00 25236.10 98.58 0.00 0.00 5066.16 2527.82 8800.55 00:28:06.361 [2024-11-20T09:45:47.092Z] =================================================================================================================== 00:28:06.361 [2024-11-20T09:45:47.092Z] Total : 25236.10 98.58 0.00 0.00 5066.16 2527.82 8800.55 00:28:06.361 Received shutdown signal, test time was about 1.000000 seconds 00:28:06.361 00:28:06.361 Latency(us) 00:28:06.361 [2024-11-20T09:45:47.092Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:28:06.361 [2024-11-20T09:45:47.092Z] =================================================================================================================== 00:28:06.361 [2024-11-20T09:45:47.092Z] Total : 0.00 0.00 0.00 
0.00 0.00 0.00 0.00 00:28:06.361 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1605 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1599 -- # read -r file 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@119 -- # nvmftestfini 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@99 -- # sync 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@102 -- # set +e 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:06.361 rmmod nvme_tcp 00:28:06.361 rmmod nvme_fabrics 00:28:06.361 rmmod nvme_keyring 00:28:06.361 10:45:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@106 -- # set -e 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@107 -- # return 0 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@336 -- # '[' -n 3389026 ']' 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@337 -- # killprocess 3389026 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # '[' -z 3389026 ']' 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # kill -0 3389026 
00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # uname 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3389026 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3389026' 00:28:06.361 killing process with pid 3389026 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@973 -- # kill 3389026 00:28:06.361 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@978 -- # wait 3389026 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # nvmf_fini 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@264 -- # local dev 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:06.620 10:45:47 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@130 -- # return 0 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:09.156 10:45:49 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # _dev=0 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@41 -- # dev_map=() 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/setup.sh@284 -- # iptr 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-save 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@542 -- # iptables-restore 00:28:09.156 00:28:09.156 real 0m11.918s 00:28:09.156 user 0m14.190s 00:28:09.156 sys 0m5.301s 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:28:09.156 ************************************ 00:28:09.156 END TEST nvmf_multicontroller 00:28:09.156 ************************************ 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@37 -- # [[ tcp == \r\d\m\a ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # [[ 0 -eq 1 ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:28:09.156 00:28:09.156 real 5m57.140s 00:28:09.156 user 10m41.913s 00:28:09.156 sys 1m59.867s 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:09.156 10:45:49 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:28:09.156 ************************************ 00:28:09.156 END TEST nvmf_host 00:28:09.156 ************************************ 00:28:09.156 10:45:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ tcp = \t\c\p ]] 
00:28:09.156 10:45:49 nvmf_tcp -- nvmf/nvmf.sh@15 -- # [[ 0 -eq 0 ]] 00:28:09.156 10:45:49 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_target_core_interrupt_mode /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:09.156 10:45:49 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:09.156 10:45:49 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.156 10:45:49 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:28:09.156 ************************************ 00:28:09.156 START TEST nvmf_target_core_interrupt_mode 00:28:09.156 ************************************ 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp --interrupt-mode 00:28:09.156 * Looking for test storage... 00:28:09.156 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.156 10:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@344 -- # case "$op" in 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@345 -- # : 1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # decimal 1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # decimal 2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@353 -- # local d=2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@355 -- # echo 2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@368 -- # return 0 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.156 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.156 --rc genhtml_branch_coverage=1 00:28:09.156 --rc genhtml_function_coverage=1 00:28:09.156 --rc genhtml_legend=1 00:28:09.156 --rc geninfo_all_blocks=1 00:28:09.156 --rc geninfo_unexecuted_blocks=1 00:28:09.156 
00:28:09.156 ' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.157 --rc genhtml_branch_coverage=1 00:28:09.157 --rc genhtml_function_coverage=1 00:28:09.157 --rc genhtml_legend=1 00:28:09.157 --rc geninfo_all_blocks=1 00:28:09.157 --rc geninfo_unexecuted_blocks=1 00:28:09.157 00:28:09.157 ' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.157 --rc genhtml_branch_coverage=1 00:28:09.157 --rc genhtml_function_coverage=1 00:28:09.157 --rc genhtml_legend=1 00:28:09.157 --rc geninfo_all_blocks=1 00:28:09.157 --rc geninfo_unexecuted_blocks=1 00:28:09.157 00:28:09.157 ' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.157 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.157 --rc genhtml_branch_coverage=1 00:28:09.157 --rc genhtml_function_coverage=1 00:28:09.157 --rc genhtml_legend=1 00:28:09.157 --rc geninfo_all_blocks=1 00:28:09.157 --rc geninfo_unexecuted_blocks=1 00:28:09.157 00:28:09.157 ' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # uname -s 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 
00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode 
-- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@5 -- # export PATH 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@50 -- # : 0 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:09.157 10:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@13 -- # TEST_ARGS=("$@") 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@15 -- # [[ 0 -eq 0 ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@16 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:09.157 ************************************ 00:28:09.157 START TEST nvmf_abort 00:28:09.157 ************************************ 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp --interrupt-mode 00:28:09.157 * Looking for test storage... 
00:28:09.157 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lcov --version 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # IFS=.-: 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@336 -- # read -ra ver1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # IFS=.-: 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@337 -- # read -ra ver2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@338 -- # local 'op=<' 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@340 -- # ver1_l=2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@341 -- # ver2_l=1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
scripts/common.sh@344 -- # case "$op" in 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@345 -- # : 1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # decimal 1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@365 -- # ver1[v]=1 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # decimal 2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@353 -- # local d=2 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:09.157 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@355 -- # echo 2 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@366 -- # ver2[v]=2 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@368 -- # return 0 00:28:09.158 10:45:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.158 --rc genhtml_branch_coverage=1 00:28:09.158 --rc genhtml_function_coverage=1 00:28:09.158 --rc genhtml_legend=1 00:28:09.158 --rc geninfo_all_blocks=1 00:28:09.158 --rc geninfo_unexecuted_blocks=1 00:28:09.158 00:28:09.158 ' 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.158 --rc genhtml_branch_coverage=1 00:28:09.158 --rc genhtml_function_coverage=1 00:28:09.158 --rc genhtml_legend=1 00:28:09.158 --rc geninfo_all_blocks=1 00:28:09.158 --rc geninfo_unexecuted_blocks=1 00:28:09.158 00:28:09.158 ' 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.158 --rc genhtml_branch_coverage=1 00:28:09.158 --rc genhtml_function_coverage=1 00:28:09.158 --rc genhtml_legend=1 00:28:09.158 --rc geninfo_all_blocks=1 00:28:09.158 --rc geninfo_unexecuted_blocks=1 00:28:09.158 00:28:09.158 ' 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:09.158 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:09.158 --rc genhtml_branch_coverage=1 00:28:09.158 --rc genhtml_function_coverage=1 00:28:09.158 --rc genhtml_legend=1 00:28:09.158 --rc geninfo_all_blocks=1 00:28:09.158 --rc geninfo_unexecuted_blocks=1 00:28:09.158 00:28:09.158 ' 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:09.158 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@15 -- # shopt -s extglob 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@50 -- # : 0 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@260 -- # remove_target_ns 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:09.416 
10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@125 -- # xtrace_disable 00:28:09.416 10:45:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # pci_devs=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # net_devs=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # e810=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@136 -- # local -ga e810 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # x722=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@137 -- # local -ga x722 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # 
mlx=() 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@138 -- # local -ga mlx 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:15.983 
10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:15.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:15.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:15.983 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:15.983 Found net devices under 0000:86:00.0: cvl_0_0 00:28:15.983 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:15.983 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:15.984 Found net devices under 0000:86:00.1: cvl_0_1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@262 -- # is_hw=yes 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:15.984 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@257 -- # create_target_ns 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 
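[editor note] The trace above hands `setup_interfaces` an address pool of `0x0a000001` (167772161) and, a few entries later, `val_to_ip` turns each pool value into dotted-quad form via `printf '%u.%u.%u.%u\n'`. A minimal standalone sketch of that conversion (the function name comes from `nvmf/setup.sh`; the byte-extraction arithmetic is my reconstruction of what must feed the printf seen in the log):

```shell
#!/usr/bin/env bash
# Convert a 32-bit integer into dotted-quad notation, the way nvmf/setup.sh's
# val_to_ip does before handing the address to "ip addr add".
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$((val >> 24 & 0xff)) \
		$((val >> 16 & 0xff)) \
		$((val >> 8 & 0xff)) \
		$((val & 0xff))
}

val_to_ip 167772161   # 0x0a000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0a000002 -> 10.0.0.2
```

This matches the two assignments visible later in the trace: pool value 167772161 becomes 10.0.0.1 on the initiator device and 167772162 becomes 10.0.0.2 on the target device.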
00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@28 -- # local -g _dev 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # ips=() 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- 
nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772161 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip 
addr add 10.0.0.1/24 dev cvl_0_0 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:15.984 10.0.0.1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@11 -- # local val=167772162 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:15.984 10.0.0.2 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@79 -- # [[ phy 
== veth ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:15.984 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local 
dev=initiator0 in_ns= ip 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:15.985 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:15.985 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.437 ms 00:28:15.985 00:28:15.985 --- 10.0.0.1 ping statistics --- 00:28:15.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.985 rtt min/avg/max/mdev = 0.437/0.437/0.437/0.000 ms 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:15.985 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:15.985 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:15.985 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:28:15.985 00:28:15.985 --- 10.0.0.2 ping statistics --- 00:28:15.985 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:15.985 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@270 -- # return 0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
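[editor note] The sequence that just completed (create the namespace, move `cvl_0_1` into it, assign 10.0.0.1/10.0.0.2, bring both links up, ping in each direction) can be condensed into a short dry-run script. This is a sketch, not the real `setup_interface_pair`: `run` only echoes the commands here, since executing them needs root; the namespace and device names are taken from the log, and the `NVMF_TARGET_NS_CMD` array mirrors how the trace prefixes every target-side command with `ip netns exec`:

```shell
#!/usr/bin/env bash
# Dry-run reconstruction of the interface-pair setup traced above. The target
# side (cvl_0_1) lives in the nvmf_ns_spdk namespace, so each command aimed at
# it is prefixed with "ip netns exec", as NVMF_TARGET_NS_CMD does in the log.
ns=nvmf_ns_spdk
initiator=cvl_0_0 target=cvl_0_1
NVMF_TARGET_NS_CMD=(ip netns exec "$ns")

run() { echo "$*"; }   # drop the echo to execute for real (requires root)

run ip netns add "$ns"
run "${NVMF_TARGET_NS_CMD[@]}" ip link set lo up
run ip link set "$target" netns "$ns"
run ip addr add 10.0.0.1/24 dev "$initiator"
run "${NVMF_TARGET_NS_CMD[@]}" ip addr add 10.0.0.2/24 dev "$target"
run ip link set "$initiator" up
run "${NVMF_TARGET_NS_CMD[@]}" ip link set "$target" up
run "${NVMF_TARGET_NS_CMD[@]}" ping -c 1 10.0.0.1
run ping -c 1 10.0.0.2
```

The two ping directions at the end correspond to the `ping_ips 1` pair check in the trace: the target namespace pings the initiator address, then the default namespace pings the target address.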
00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:15.985 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:15.986 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=initiator1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target0 00:28:15.986 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:15.986 10:45:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@107 -- # local dev=target1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@109 -- # return 1 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@168 -- # dev= 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@169 -- # return 0 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@328 -- # nvmfpid=3393132 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@329 -- # waitforlisten 3393132 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@835 -- # '[' -z 3393132 ']' 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:15.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:15.986 10:45:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:15.986 [2024-11-20 10:45:56.006649] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:15.986 [2024-11-20 10:45:56.007631] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:28:15.986 [2024-11-20 10:45:56.007669] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:15.986 [2024-11-20 10:45:56.088342] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:15.986 [2024-11-20 10:45:56.129967] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:15.986 [2024-11-20 10:45:56.130002] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:15.986 [2024-11-20 10:45:56.130009] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:15.986 [2024-11-20 10:45:56.130014] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:15.986 [2024-11-20 10:45:56.130019] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:15.986 [2024-11-20 10:45:56.131457] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:15.986 [2024-11-20 10:45:56.131564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:15.986 [2024-11-20 10:45:56.131564] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:15.986 [2024-11-20 10:45:56.198157] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:15.986 [2024-11-20 10:45:56.199016] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:15.986 [2024-11-20 10:45:56.199417] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:28:15.986 [2024-11-20 10:45:56.199519] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@868 -- # return 0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 [2024-11-20 10:45:56.888281] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 
00:28:16.245 Malloc0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 Delay0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 
00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.245 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.245 [2024-11-20 10:45:56.972226] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:16.503 10:45:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:28:16.503 [2024-11-20 10:45:57.059353] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:28:18.400 Initializing NVMe Controllers 00:28:18.400 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:28:18.400 controller IO queue size 128 less than required 00:28:18.400 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:28:18.401 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:28:18.401 Initialization complete. Launching workers. 
00:28:18.401 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 38117 00:28:18.401 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 38174, failed to submit 66 00:28:18.401 success 38117, unsuccessful 57, failed 0 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@335 -- # nvmfcleanup 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@99 -- # sync 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@102 -- # set +e 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@103 -- # for i in {1..20} 00:28:18.401 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:28:18.401 rmmod nvme_tcp 00:28:18.401 rmmod nvme_fabrics 00:28:18.658 rmmod nvme_keyring 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:28:18.658 10:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@106 -- # set -e 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@107 -- # return 0 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@336 -- # '[' -n 3393132 ']' 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@337 -- # killprocess 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@954 -- # '[' -z 3393132 ']' 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@958 -- # kill -0 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # uname 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3393132' 00:28:18.658 killing process with pid 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@973 -- # kill 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@978 -- # wait 3393132 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:28:18.658 10:45:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@342 -- # nvmf_fini 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@264 -- # local dev 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@267 -- # remove_target_ns 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:18.658 10:45:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:21.191 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@268 -- # delete_main_bridge 00:28:21.191 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:28:21.191 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@130 -- # return 0 00:28:21.191 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:21.191 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:28:21.192 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # _dev=0 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@41 -- # dev_map=() 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/setup.sh@284 -- # iptr 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-save 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- nvmf/common.sh@542 -- # iptables-restore 00:28:21.192 00:28:21.192 real 0m11.760s 00:28:21.192 user 0m10.235s 00:28:21.192 sys 0m5.738s 
00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:28:21.192 ************************************ 00:28:21.192 END TEST nvmf_abort 00:28:21.192 ************************************ 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@17 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:28:21.192 ************************************ 00:28:21.192 START TEST nvmf_ns_hotplug_stress 00:28:21.192 ************************************ 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp --interrupt-mode 00:28:21.192 * Looking for test storage... 
00:28:21.192 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lcov --version 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@333 -- # local ver1 ver1_l 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@334 -- # local ver2 ver2_l 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # IFS=.-: 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@336 -- # read -ra ver1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # IFS=.-: 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@337 -- # read -ra ver2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@338 -- # local 'op=<' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@340 -- # ver1_l=2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@341 -- # ver2_l=1 00:28:21.192 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@344 -- # case "$op" in 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@345 -- # : 1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v = 0 )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # decimal 1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@365 -- # ver1[v]=1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # decimal 2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@353 -- # local d=2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@355 -- # echo 2 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@366 -- # ver2[v]=2 00:28:21.192 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@368 -- # return 0 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:28:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.192 --rc genhtml_branch_coverage=1 00:28:21.192 --rc genhtml_function_coverage=1 00:28:21.192 --rc genhtml_legend=1 00:28:21.192 --rc geninfo_all_blocks=1 00:28:21.192 --rc geninfo_unexecuted_blocks=1 00:28:21.192 00:28:21.192 ' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:28:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.192 --rc genhtml_branch_coverage=1 00:28:21.192 --rc genhtml_function_coverage=1 00:28:21.192 --rc genhtml_legend=1 00:28:21.192 --rc geninfo_all_blocks=1 00:28:21.192 --rc geninfo_unexecuted_blocks=1 00:28:21.192 00:28:21.192 ' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:28:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.192 --rc genhtml_branch_coverage=1 00:28:21.192 --rc genhtml_function_coverage=1 00:28:21.192 --rc genhtml_legend=1 00:28:21.192 --rc geninfo_all_blocks=1 00:28:21.192 --rc geninfo_unexecuted_blocks=1 00:28:21.192 00:28:21.192 ' 00:28:21.192 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:28:21.192 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:28:21.192 --rc genhtml_branch_coverage=1 00:28:21.192 --rc genhtml_function_coverage=1 00:28:21.192 --rc genhtml_legend=1 00:28:21.192 --rc geninfo_all_blocks=1 00:28:21.192 --rc geninfo_unexecuted_blocks=1 00:28:21.192 00:28:21.192 ' 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:28:21.192 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@15 -- # shopt -s extglob 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.193 
10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@50 -- # : 0 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:28:21.193 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@54 -- # have_pci_nics=0 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # prepare_net_devs 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # local -g is_hw=no 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # remove_target_ns 00:28:21.193 10:46:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # xtrace_disable 00:28:21.193 10:46:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # pci_devs=() 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@131 -- # local -a pci_devs 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # pci_net_devs=() 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # pci_drivers=() 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@133 -- # local -A pci_drivers 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@135 -- # net_devs=() 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@135 -- # local -ga net_devs 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # e810=() 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@136 -- # local -ga e810 00:28:27.780 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # x722=() 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@137 -- # local -ga x722 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # mlx=() 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@138 -- # local -ga mlx 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@152 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:27.781 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:27.781 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:27.781 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:28:27.781 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:27.781 Found net devices under 0000:86:00.0: cvl_0_0 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # [[ up == up ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:27.781 Found net devices under 0000:86:00.1: cvl_0_1 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@262 -- # is_hw=yes 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@257 -- # create_target_ns 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@142 -- # 
local ns=nvmf_ns_spdk 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@27 -- # local -gA dev_map 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@28 -- # local -g _dev 00:28:27.781 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # ips=() 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:28:27.781 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:28:27.782 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772161 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 
dev cvl_0_0' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:28:27.782 10.0.0.1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@11 -- # local val=167772162 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:28:27.782 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:28:27.782 10.0.0.2 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 
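The `val_to_ip` calls traced above (167772161 becoming 10.0.0.1, 167772162 becoming 10.0.0.2) convert the integer `ip_pool` value (0x0A000001) into dotted-quad form. A minimal bash sketch of that conversion, assuming the same byte-shifting that nvmf/setup.sh's `printf '%u.%u.%u.%u\n'` line implies:

```shell
# Sketch of the val_to_ip helper seen in the trace: split a 32-bit
# integer into four octets and print them dotted-quad. The function
# name matches the trace; the shift arithmetic is our assumption.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xFF )) \
    $(( (val >> 16) & 0xFF )) \
    $(( (val >> 8)  & 0xFF )) \
    $((  val        & 0xFF ))
}

val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This explains the `ip_pool=0x0a000001` seen at setup_interfaces: each initiator/target pair consumes two consecutive addresses from the pool (`ip_pool += 2` per pair).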
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@38 -- # ping_ips 1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:28:27.782 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 
00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:28:27.782 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:28:27.782 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.474 ms 00:28:27.782 00:28:27.782 --- 10.0.0.1 ping statistics --- 00:28:27.782 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.782 rtt min/avg/max/mdev = 0.474/0.474/0.474/0.000 ms 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:27.782 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:27.783 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:28:27.783 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:28:27.783 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.222 ms 00:28:27.783 00:28:27.783 --- 10.0.0.2 ping statistics --- 00:28:27.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:27.783 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair++ )) 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # return 0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
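For reference, the `ping_ip` / `eval` pattern traced above (setup.sh@89-92) can be sketched as follows. This is a minimal sketch: the function here only builds and echoes the command string instead of executing it, so it stands in for the real helper in nvmf/setup.sh, which runs the ping (optionally inside a network namespace) via `eval`.

```shell
#!/usr/bin/env bash
# Stand-in sketch of nvmf/setup.sh's ping_ip: build the ping command,
# prefixing "ip netns exec <ns>" when a namespace is given, as the
# xtrace above shows for the nvmf_ns_spdk target namespace.
ping_ip() {
  local ip=$1 in_ns=${2:-} count=1
  local cmd="ping -c $count $ip"
  if [[ -n $in_ns ]]; then
    cmd="ip netns exec $in_ns $cmd"
  fi
  echo "$cmd"   # the real helper evals this instead of echoing it
}

ping_ip 10.0.0.2                 # host-side ping, as in the trace
ping_ip 10.0.0.2 nvmf_ns_spdk    # namespace-scoped variant
```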
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:28:27.783 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=initiator1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target0 
in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:27.783 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # get_net_dev target1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@107 -- # local dev=target1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@109 -- # return 1 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@168 -- # dev= 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@169 -- # return 0 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # 
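The `get_ip_address` / `get_net_dev` resolution traced above (setup.sh@107-175) maps a logical device name (`initiator0`, `target0`, `target1`) to a kernel interface and reads its IP from the interface alias. A minimal sketch, with associative arrays standing in for the environment variables and for `/sys/class/net/<dev>/ifalias` (the real helper reads the sysfs file, optionally through `ip netns exec`, and returns rc 0 with empty output for unmapped devices such as `target1` above):

```shell
#!/usr/bin/env bash
# Stand-in tables: logical name -> interface, interface -> ifalias contents.
declare -A net_devs=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
declare -A ifalias=([cvl_0_0]=10.0.0.1 [cvl_0_1]=10.0.0.2)

get_ip_address() {
  local dev=${net_devs[$1]:-}
  # Unmapped device: print nothing, succeed (matches setup.sh@169 "return 0").
  [[ -n $dev ]] || return 0
  echo "${ifalias[$dev]}"
}

get_ip_address target0      # -> 10.0.0.2 (NVMF_FIRST_TARGET_IP in the log)
get_ip_address initiator0   # -> 10.0.0.1 (NVMF_FIRST_INITIATOR_IP)
get_ip_address target1      # -> empty   (NVMF_SECOND_TARGET_IP unset)
```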
NVMF_TRANSPORT_OPTS='-t tcp' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@726 -- # xtrace_disable 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@328 -- # nvmfpid=3397166 00:28:27.783 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # waitforlisten 3397166 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xE 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@835 -- # '[' -z 3397166 ']' 00:28:27.784 10:46:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@840 -- # local max_retries=100 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:27.784 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@844 -- # xtrace_disable 00:28:27.784 10:46:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:27.784 [2024-11-20 10:46:07.852746] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:28:27.784 [2024-11-20 10:46:07.853686] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:28:27.784 [2024-11-20 10:46:07.853718] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:27.784 [2024-11-20 10:46:07.929612] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:28:27.784 [2024-11-20 10:46:07.970848] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:27.784 [2024-11-20 10:46:07.970883] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:28:27.784 [2024-11-20 10:46:07.970890] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:27.784 [2024-11-20 10:46:07.970897] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:27.784 [2024-11-20 10:46:07.970902] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:27.784 [2024-11-20 10:46:07.972182] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:28:27.784 [2024-11-20 10:46:07.972292] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:28:27.784 [2024-11-20 10:46:07.972293] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:28:27.784 [2024-11-20 10:46:08.038027] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:28:27.784 [2024-11-20 10:46:08.038796] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:28:27.784 [2024-11-20 10:46:08.039026] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:28:27.784 [2024-11-20 10:46:08.039176] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 
00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@868 -- # return 0 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@732 -- # xtrace_disable 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:28:27.784 [2024-11-20 10:46:08.273025] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:27.784 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:28:28.058 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:28.058 [2024-11-20 10:46:08.673836] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target 
Listening on 10.0.0.2 port 4420 *** 00:28:28.058 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:28:28.333 10:46:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:28:28.592 Malloc0 00:28:28.592 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:28:28.592 Delay0 00:28:28.592 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:28.850 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:28:29.108 NULL1 00:28:29.108 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:28:29.108 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=3397646 00:28:29.108 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 
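The target bring-up performed above by ns_hotplug_stress.sh@27-36 can be condensed into the following RPC sequence (a sketch: `rpc()` is an echo stand-in for `scripts/rpc.py`; the transport options, NQN, bdev names, and sizes are copied from the log):

```shell
#!/usr/bin/env bash
rpc() { echo "rpc $*"; }   # stand-in for scripts/rpc.py (assumption)

rpc nvmf_create_transport -t tcp -o -u 8192
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
rpc bdev_malloc_create 32 512 -b Malloc0
rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
rpc bdev_null_create NULL1 1000 512
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
```

After this sequence the script starts `spdk_nvme_perf` against `traddr:10.0.0.2 trsvcid:4420` and records its PID, which the stress loop then checks with `kill -0`.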
512 -Q 1000 00:28:29.108 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:29.108 10:46:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:30.481 Read completed with error (sct=0, sc=11) 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 10:46:10 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:30.481 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:28:30.481 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:28:30.739 true 00:28:30.739 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:30.739 10:46:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:31.670 10:46:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:31.927 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:28:31.927 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:28:31.927 true 00:28:31.927 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:31.927 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:32.184 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:32.441 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:28:32.441 10:46:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:28:32.699 true 00:28:32.699 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:32.699 10:46:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 
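The repeating cycle visible in the iterations above (ns_hotplug_stress.sh@44-50) removes namespace 1, re-adds `Delay0`, grows `NULL1` by one block, and verifies the perf process is still alive between steps. A minimal sketch of that loop logic, with `rpc()` again an echo stand-in for `scripts/rpc.py` and the `kill -0` liveness check omitted:

```shell
#!/usr/bin/env bash
rpc() { echo "rpc $*"; }   # stand-in for scripts/rpc.py (assumption)

null_size=1000             # matches ns_hotplug_stress.sh@25 in the log
for _ in 1 2 3; do         # the real test loops until the 30s perf run ends
  rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
  rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  null_size=$((null_size + 1))
  rpc bdev_null_resize NULL1 "$null_size"
done
echo "final null_size=$null_size"   # 1003 after three iterations
```

This reproduces the `null_size=1001`, `1002`, `1003`, ... progression in the log; the "Read completed with error (sct=0, sc=11)" messages are the expected I/O failures while the namespace is detached.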
00:28:33.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.631 10:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:33.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.631 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.889 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:33.889 10:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:28:33.889 10:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:28:34.146 true 00:28:34.146 10:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:34.146 10:46:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.079 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:35.079 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.079 10:46:15 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:28:35.079 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:28:35.336 true 00:28:35.336 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:35.336 10:46:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:35.336 10:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:35.593 10:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:28:35.593 10:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:28:35.850 true 00:28:35.850 10:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:35.850 10:46:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 10:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:37.221 10:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:28:37.221 10:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:28:37.479 true 00:28:37.479 10:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:37.479 10:46:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.043 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.301 10:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:38.301 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:38.301 10:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:28:38.301 10:46:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:28:38.559 true 00:28:38.559 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:38.559 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:38.817 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:39.074 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:28:39.074 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:28:39.074 true 00:28:39.074 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:39.074 10:46:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:40.444 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 10:46:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, 
sc=11) 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:40.445 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:28:40.445 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:28:40.703 true 00:28:40.703 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:40.703 10:46:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:41.636 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:41.636 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:28:41.636 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:28:41.894 true 00:28:41.894 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:41.894 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:42.152 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:42.410 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:28:42.410 10:46:22 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:28:42.410 true 00:28:42.668 10:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:42.668 10:46:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:43.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.601 10:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:43.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.601 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.859 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:43.859 10:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- 
# null_size=1013 00:28:43.859 10:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:28:44.117 true 00:28:44.117 10:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:44.117 10:46:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.049 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.049 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:28:45.049 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:28:45.306 true 00:28:45.306 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:45.306 10:46:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:45.306 10:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:45.565 10:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:28:45.565 10:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:28:45.822 true 00:28:45.823 10:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:45.823 10:46:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 10:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:47.197 10:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:28:47.197 10:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:28:47.454 true 00:28:47.454 10:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:47.454 10:46:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.388 10:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.388 10:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:28:48.388 10:46:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:28:48.644 true 00:28:48.644 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:48.644 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:48.902 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:48.902 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:28:48.902 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:28:49.160 true 00:28:49.160 10:46:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:49.160 10:46:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.092 10:46:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:50.349 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:28:50.349 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:28:50.607 true 00:28:50.607 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:50.607 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:50.865 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:51.123 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:28:51.123 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:28:51.123 true 
00:28:51.123 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:51.123 10:46:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 10:46:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:52.530 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:28:52.530 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:28:52.787 true 00:28:52.787 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:52.787 10:46:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.716 
10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:53.716 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:28:53.716 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:28:53.974 true 00:28:53.974 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:53.974 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:53.974 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:54.231 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:28:54.231 10:46:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:28:54.489 true 00:28:54.489 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:54.489 10:46:35 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 
1 00:28:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.422 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:55.422 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.680 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:55.680 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:28:55.680 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:28:55.936 true 00:28:55.936 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:55.936 10:46:36 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:56.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.869 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:56.869 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:56.869 10:46:37 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:28:56.869 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:28:57.127 true 00:28:57.127 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:57.127 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:57.383 10:46:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:57.641 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:28:57.641 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:28:57.641 true 00:28:57.641 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:57.641 10:46:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 Message suppressed 999 times: Read completed with error (sct=0, sc=11) 00:28:59.014 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:28:59.014 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:28:59.272 true 00:28:59.272 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:28:59.272 10:46:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.205 10:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:00.205 Initializing NVMe Controllers 00:29:00.205 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:00.205 Controller IO queue size 128, less than required. 00:29:00.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.205 Controller IO queue size 128, less than required. 
00:29:00.205 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:00.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:29:00.205 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:29:00.205 Initialization complete. Launching workers. 00:29:00.205 ======================================================== 00:29:00.205 Latency(us) 00:29:00.205 Device Information : IOPS MiB/s Average min max 00:29:00.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 2210.57 1.08 42062.28 2442.24 1012657.60 00:29:00.205 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 18784.67 9.17 6813.67 1926.03 445908.39 00:29:00.205 ======================================================== 00:29:00.205 Total : 20995.23 10.25 10524.96 1926.03 1012657.60 00:29:00.205 00:29:00.205 10:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:29:00.205 10:46:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:29:00.463 true 00:29:00.463 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 3397646 00:29:00.463 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (3397646) - No such process 00:29:00.463 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 3397646 00:29:00.463 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:00.721 10:46:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:29:00.979 null0 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:00.979 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:29:01.238 null1 00:29:01.238 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.238 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.238 10:46:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:29:01.495 null2 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:29:01.495 null3 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.495 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:29:01.754 null4 00:29:01.754 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:01.754 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:01.754 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:29:02.013 null5 00:29:02.013 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:02.013 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:02.013 10:46:42 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:29:02.272 null6 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:29:02.272 null7 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6
00:29:02.272 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!)
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 3402979 3402980 3402981 3402983 3402984 3402986 3402988 3402991
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.273 10:46:42 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:02.531 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.790 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.791 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:02.791 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:02.791 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:02.791 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.049 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:03.306 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:03.307 10:46:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:03.564 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:03.821 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.079 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5
00:29:04.337 10:46:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 ))
00:29:04.337 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3
00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8
00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2
00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns
nqn.2016-06.io.spdk:cnode1 7 00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:04.596 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.854 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.855 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:04.855 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:04.855 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:04.855 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.113 10:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:05.113 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 
00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.371 10:46:45 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.371 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:05.630 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:05.888 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.147 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:29:06.405 10:46:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.405 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:29:06.664 10:46:47 
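The trace above interleaves `(( ++i ))` / `(( i < 10 ))` guards with `nvmf_subsystem_add_ns` and `nvmf_subsystem_remove_ns` RPC calls for nsid 1..8 (backed by null0..null7). A minimal runnable sketch of that pattern follows; it is an inference from this log, not the actual ns_hotplug_stress.sh, and `rpc` here is an echo stub standing in for `scripts/rpc.py` (the real script talks to a live SPDK target):

```shell
#!/usr/bin/env bash
# Sketch of the hotplug loop suggested by the trace: one background
# loop per namespace, each doing 10 add/remove rounds. "rpc" is a
# stub that only echoes the call it would make.
rpc() { echo "rpc $*"; }

NQN=nqn.2016-06.io.spdk:cnode1

hotplug_loop() {
    local nsid=$1 bdev=$2 i=0
    while (( i < 10 )); do                                # mirrors @16 in the log
        rpc nvmf_subsystem_add_ns -n "$nsid" "$NQN" "$bdev"   # mirrors @17
        rpc nvmf_subsystem_remove_ns "$NQN" "$nsid"           # mirrors @18
        (( ++i ))
    done
}

# nsid N is backed by bdev null(N-1) in the trace, e.g. -n 8 ... null7.
out=$(
    for n in {1..8}; do
        hotplug_loop "$n" "null$((n - 1))" &
    done
    wait
)
printf '%s\n' "$out"
```

Running the background loops concurrently is what produces the shuffled add/remove ordering visible in the log; each per-namespace sequence is still strictly add-then-remove.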
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@99 -- # sync 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@102 -- # set +e 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:06.664 rmmod nvme_tcp 00:29:06.664 rmmod nvme_fabrics 00:29:06.664 rmmod nvme_keyring 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@106 -- # set -e 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@107 -- # return 0 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@336 -- # '[' -n 3397166 ']' 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@337 -- # killprocess 3397166 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # '[' -z 3397166 ']' 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- common/autotest_common.sh@958 -- # kill -0 3397166 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # uname 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3397166 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3397166' 00:29:06.664 killing process with pid 3397166 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@973 -- # kill 3397166 00:29:06.664 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@978 -- # wait 3397166 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # nvmf_fini 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@264 -- # local dev 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:06.922 10:46:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:06.922 10:46:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@130 -- # return 0 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress 
-- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # _dev=0 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@41 -- # dev_map=() 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/setup.sh@284 -- # iptr 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-save 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- nvmf/common.sh@542 -- # iptables-restore 00:29:08.829 00:29:08.829 real 0m47.971s 00:29:08.829 user 2m58.695s 00:29:08.829 sys 0m20.142s 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:08.829 
10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:29:08.829 ************************************ 00:29:08.829 END TEST nvmf_ns_hotplug_stress 00:29:08.829 ************************************ 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@18 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:08.829 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:09.088 ************************************ 00:29:09.088 START TEST nvmf_delete_subsystem 00:29:09.088 ************************************ 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp --interrupt-mode 00:29:09.088 * Looking for test storage... 
00:29:09.088 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lcov --version 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # IFS=.-: 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@336 -- # read -ra ver1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # IFS=.-: 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@337 -- # read -ra ver2 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@338 -- # local 'op=<' 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@340 -- # ver1_l=2 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@341 -- # ver2_l=1 00:29:09.088 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@344 -- # case "$op" in 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@345 -- # : 1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # decimal 1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@365 -- # ver1[v]=1 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # decimal 2 00:29:09.088 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@353 -- # local d=2 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@355 -- # echo 2 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@366 -- # ver2[v]=2 00:29:09.089 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@368 -- # return 0 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.089 --rc genhtml_branch_coverage=1 00:29:09.089 --rc genhtml_function_coverage=1 00:29:09.089 --rc genhtml_legend=1 00:29:09.089 --rc geninfo_all_blocks=1 00:29:09.089 --rc geninfo_unexecuted_blocks=1 00:29:09.089 00:29:09.089 ' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.089 --rc genhtml_branch_coverage=1 00:29:09.089 --rc genhtml_function_coverage=1 00:29:09.089 --rc genhtml_legend=1 00:29:09.089 --rc geninfo_all_blocks=1 00:29:09.089 --rc geninfo_unexecuted_blocks=1 00:29:09.089 00:29:09.089 ' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.089 --rc genhtml_branch_coverage=1 00:29:09.089 --rc genhtml_function_coverage=1 00:29:09.089 --rc genhtml_legend=1 00:29:09.089 --rc geninfo_all_blocks=1 00:29:09.089 --rc geninfo_unexecuted_blocks=1 00:29:09.089 00:29:09.089 ' 00:29:09.089 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:09.089 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:09.089 --rc genhtml_branch_coverage=1 00:29:09.089 --rc genhtml_function_coverage=1 00:29:09.089 --rc genhtml_legend=1 00:29:09.089 --rc geninfo_all_blocks=1 00:29:09.089 --rc geninfo_unexecuted_blocks=1 00:29:09.089 00:29:09.089 ' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@15 -- # shopt -s extglob 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.089 
10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@50 -- # : 0 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:09.089 10:46:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # remove_target_ns 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval 
'_remove_target_ns 15> /dev/null' 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # xtrace_disable 00:29:09.089 10:46:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # pci_devs=() 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:15.781 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # net_devs=() 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 
-- # e810=() 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@136 -- # local -ga e810 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # x722=() 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@137 -- # local -ga x722 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # mlx=() 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@138 -- # local -ga mlx 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@156 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:15.782 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:15.782 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:15.782 Found net devices under 0000:86:00.0: cvl_0_0 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:15.782 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:15.782 Found net devices under 0000:86:00.1: cvl_0_1 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@262 -- # is_hw=yes 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@257 -- # create_target_ns 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:15.782 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@28 -- # local -g _dev 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- 
# (( _dev < max + no )) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # ips=() 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:15.782 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:15.783 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772161 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:15.783 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:15.783 10.0.0.1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@11 -- # local val=167772162 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:15.783 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:15.783 10.0.0.2 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 
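The `set_ip` calls above lean on a small `val_to_ip` helper that unpacks a 32-bit value into dotted-quad form (the trace only shows its result, `printf '%u.%u.%u.%u\n' 10 0 0 1` for 167772161). A minimal reconstruction of that helper, plus the two-consecutive-addresses-per-pair allocation visible in `setup_interfaces` (`ips=("$ip" $((++ip)))`); the shift-and-mask body is an assumption, since the function source is not in the log:

```shell
#!/usr/bin/env bash
# Reconstruction of val_to_ip: unpack a 32-bit integer into dotted-quad
# notation. The function name and printf format come from the trace;
# the shift/mask body is an assumption.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8) & 0xff )) \
    $(( val & 0xff ))
}

# Each initiator/target pair consumes two consecutive addresses from a
# pool starting at 0x0a000001 (10.0.0.1), as in setup_interface_pair.
ip_pool=$(( 0x0a000001 ))
for (( pair = 0; pair < 2; pair++ )); do
  echo "pair${pair}:" \
    "initiator=$(val_to_ip $(( ip_pool + pair * 2 )))" \
    "target=$(val_to_ip $(( ip_pool + pair * 2 + 1 )))"
done
```

With two pairs this yields 10.0.0.1/10.0.0.2 and 10.0.0.3/10.0.0.4, matching the addresses assigned to cvl_0_0 and cvl_0_1 above.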
00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # 
get_initiator_ip_address 0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@99 -- # ping_ip 
10.0.0.1 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:15.783 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:15.783 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.467 ms 00:29:15.783 00:29:15.783 --- 10.0.0.1 ping statistics --- 00:29:15.783 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.783 rtt min/avg/max/mdev = 0.467/0.467/0.467/0.000 ms 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:29:15.783 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:15.784 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:15.784 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.224 ms 00:29:15.784 00:29:15.784 --- 10.0.0.2 ping statistics --- 00:29:15.784 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:15.784 rtt min/avg/max/mdev = 0.224/0.224/0.224/0.000 ms 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # return 0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator0 
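Stripped of the xtrace noise, the sequence from `create_target_ns` through `ping_ips` above boils down to a handful of ip(8) and iptables(8) commands. A dry-run summary using the namespace and device names from this log; `run` only echoes each command, because the real sequence needs root and the cvl_* hardware:

```shell
#!/usr/bin/env bash
# Dry-run summary of the namespace plumbing performed in the trace.
# run() echoes instead of executing; swap its body for "$@" to run
# for real (root required).
ns=nvmf_ns_spdk
run() { echo "$*"; }

run ip netns add "$ns"                                # create_target_ns
run ip netns exec "$ns" ip link set lo up
run ip link set cvl_0_1 netns "$ns"                   # add_to_ns: target side
run ip addr add 10.0.0.1/24 dev cvl_0_0               # set_ip, initiator
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up                            # set_up, both sides
run ip netns exec "$ns" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
run ip netns exec "$ns" ping -c 1 10.0.0.1            # ping_ips, both directions
run ping -c 1 10.0.0.2
```

The iptables rule mirrors the `ipts` wrapper above, which additionally tags the rule with an `SPDK_NVMF:` comment so teardown can find and delete it later.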
00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 
00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 
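Note that `get_ip_address` above never queries `ip addr`: it resolves the logical name (initiator0, target0) to a device through `dev_map` and then reads back the `ifalias` file that `set_ip` populated earlier with `tee`. A sketch of that round-trip against a temporary directory standing in for `/sys/class/net`, so it is runnable without root or the cvl_* devices; the `dev_map` indirection is simplified to a plain associative array:

```shell
#!/usr/bin/env bash
# Mock /sys/class/net with a temp dir so the ifalias round-trip is
# runnable anywhere. Layout and file names mirror the real sysfs.
sysnet=$(mktemp -d)
mkdir -p "$sysnet/cvl_0_0" "$sysnet/cvl_0_1"

# Simplified stand-in for the dev_map built by setup_interface_pair.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

# set_ip stores the address in ifalias; get_ip_address reads it back.
set_ip()         { echo "$2" | tee "$sysnet/${dev_map[$1]}/ifalias" >/dev/null; }
get_ip_address() { cat "$sysnet/${dev_map[$1]}/ifalias"; }

set_ip initiator0 10.0.0.1
set_ip target0    10.0.0.2
get_ip_address initiator0   # 10.0.0.1
get_ip_address target0      # 10.0.0.2
```

Storing the address in ifalias this way gives every later helper (ping_ips, the legacy-env export) a single source of truth per device, inside or outside the namespace.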
00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:15.784 10:46:55 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@107 -- # local dev=target1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@109 -- # return 1 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@168 -- # dev= 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@169 -- # return 0 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@338 -- # 
NVMF_SECOND_TARGET_IP= 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.784 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@328 -- # nvmfpid=3407373 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # waitforlisten 3407373 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@835 -- # '[' -z 3407373 ']' 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:15.785 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:15.785 10:46:55 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 [2024-11-20 10:46:55.871661] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:15.785 [2024-11-20 10:46:55.872566] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:29:15.785 [2024-11-20 10:46:55.872597] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:15.785 [2024-11-20 10:46:55.951756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:29:15.785 [2024-11-20 10:46:55.992100] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:15.785 [2024-11-20 10:46:55.992134] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:29:15.785 [2024-11-20 10:46:55.992141] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:15.785 [2024-11-20 10:46:55.992147] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:15.785 [2024-11-20 10:46:55.992152] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:15.785 [2024-11-20 10:46:55.993343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:15.785 [2024-11-20 10:46:55.993343] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:15.785 [2024-11-20 10:46:56.058464] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:15.785 [2024-11-20 10:46:56.059030] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:15.785 [2024-11-20 10:46:56.059272] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@868 -- # return 0 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 [2024-11-20 10:46:56.126014] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
common/autotest_common.sh@10 -- # set +x 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 [2024-11-20 10:46:56.154341] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 NULL1 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 
00:29:15.785 Delay0 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=3407399 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:29:15.785 10:46:56 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:15.785 [2024-11-20 10:46:56.267691] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
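The xtrace output above walks through the fixture setup for the delete-subsystem test: a TCP transport, subsystem `nqn.2016-06.io.spdk:cnode1`, a listener on 10.0.0.2:4420, a null bdev wrapped in a delay bdev, and the namespace attach, before `spdk_nvme_perf` is launched against it. Collected in one place as a sketch (the RPC arguments are taken verbatim from the log; the `rpc.py` path and use of the default `/var/tmp/spdk.sock` socket are assumptions about a typical SPDK tree, and a running `nvmf_tgt` is required):

```shell
#!/usr/bin/env bash
# Sketch of the subsystem fixture built by delete_subsystem.sh, replayed
# against a live nvmf_tgt via SPDK's JSON-RPC client. Not runnable without
# a target process listening on the default RPC socket.
rpc=./scripts/rpc.py

$rpc nvmf_create_transport -t tcp -o -u 8192
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
$rpc bdev_null_create NULL1 1000 512          # 1000 MiB backing, 512 B blocks
$rpc bdev_delay_create -b NULL1 -d Delay0 \
    -r 1000000 -t 1000000 -w 1000000 -n 1000000   # 1 s artificial latencies
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
```

The 1-second delay bdev latencies are what keep I/O in flight long enough for the subsequent `nvmf_delete_subsystem` call to abort it, producing the `sc=8` (aborted) completions seen below.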
00:29:17.679 10:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:17.679 10:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:17.679 10:46:58 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, 
sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 starting I/O failed: -6 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 [2024-11-20 10:46:58.383162] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c14a0 is same with the state(6) to be set 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 
00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read 
completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Read completed with error (sct=0, sc=8) 00:29:17.679 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 
00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error 
(sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with 
error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 Write completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 Read completed with error (sct=0, sc=8) 00:29:17.680 starting I/O failed: -6 00:29:17.680 [2024-11-20 10:46:58.387431] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c14000c40 is same with the state(6) to be set 00:29:17.680 starting I/O failed: -6 00:29:17.680 starting I/O failed: -6 00:29:17.680 starting I/O failed: -6 00:29:17.680 starting I/O failed: -6 00:29:17.680 starting I/O failed: -6 00:29:19.051 [2024-11-20 10:46:59.363303] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c29a0 is same with the state(6) to be set 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 
Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 [2024-11-20 10:46:59.386642] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c12c0 is same with the state(6) to be set 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed 
with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 [2024-11-20 10:46:59.386810] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19c1680 is same with the state(6) to be set 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, 
sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 [2024-11-20 10:46:59.388619] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c1400d020 is same with the state(6) to be set 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 
00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Write completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.051 Read completed with error (sct=0, sc=8) 00:29:19.052 Read completed with error (sct=0, sc=8) 00:29:19.052 Read completed with error (sct=0, sc=8) 00:29:19.052 Write completed with error (sct=0, sc=8) 00:29:19.052 Write completed with error (sct=0, sc=8) 00:29:19.052 [2024-11-20 10:46:59.389135] nvme_tcp.c: 326:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f7c1400d7e0 is same with the state(6) to be set 00:29:19.052 Initializing NVMe Controllers 00:29:19.052 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:19.052 Controller IO queue size 128, less than required. 00:29:19.052 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:29:19.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:19.052 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:19.052 Initialization complete. Launching workers. 00:29:19.052 ======================================================== 00:29:19.052 Latency(us) 00:29:19.052 Device Information : IOPS MiB/s Average min max 00:29:19.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 177.17 0.09 879492.68 317.17 1006206.10 00:29:19.052 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 179.66 0.09 917871.00 300.39 1009649.59 00:29:19.052 ======================================================== 00:29:19.052 Total : 356.84 0.17 898815.65 300.39 1009649.59 00:29:19.052 00:29:19.052 [2024-11-20 10:46:59.389660] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19c29a0 (9): Bad file descriptor 00:29:19.052 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred 00:29:19.052 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.052 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0 00:29:19.052 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3407399 00:29:19.052 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@38 -- # (( delay++ > 30 )) 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 3407399 00:29:19.310 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: 
line 35: kill: (3407399) - No such process 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@45 -- # NOT wait 3407399 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@652 -- # local es=0 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@654 -- # valid_exec_arg wait 3407399 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # local arg=wait 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # type -t wait 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # wait 3407399 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@655 -- # es=1 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:29:19.310 10:46:59 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.310 [2024-11-20 10:46:59.914110] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=3408078 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 
00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:19.310 10:46:59 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:19.310 [2024-11-20 10:46:59.981715] subsystem.c:1637:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:29:19.874 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:19.874 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:19.874 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:20.438 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:20.438 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:20.438 10:47:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:21.002 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:21.002 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 
00:29:21.002 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:21.259 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:21.259 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:21.259 10:47:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:21.821 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:21.821 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:21.821 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:22.384 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:22.384 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:22.384 10:47:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:29:22.641 Initializing NVMe Controllers 00:29:22.641 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:29:22.641 Controller IO queue size 128, less than required. 00:29:22.641 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:22.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:29:22.641 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:29:22.641 Initialization complete. Launching workers. 
00:29:22.641 ======================================================== 00:29:22.641 Latency(us) 00:29:22.641 Device Information : IOPS MiB/s Average min max 00:29:22.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004521.44 1000149.57 1041908.60 00:29:22.641 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005142.25 1000242.23 1042137.45 00:29:22.641 ======================================================== 00:29:22.641 Total : 256.00 0.12 1004831.84 1000149.57 1042137.45 00:29:22.641 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 3408078 00:29:22.899 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (3408078) - No such process 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 3408078 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@99 -- # sync 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@102 -- # set +e 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- 
nvmf/common.sh@103 -- # for i in {1..20} 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:22.899 rmmod nvme_tcp 00:29:22.899 rmmod nvme_fabrics 00:29:22.899 rmmod nvme_keyring 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@106 -- # set -e 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@107 -- # return 0 00:29:22.899 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@336 -- # '[' -n 3407373 ']' 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@337 -- # killprocess 3407373 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # '[' -z 3407373 ']' 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # kill -0 3407373 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # uname 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3407373 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:22.900 10:47:03 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3407373' 00:29:22.900 killing process with pid 3407373 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@973 -- # kill 3407373 00:29:22.900 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@978 -- # wait 3407373 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # nvmf_fini 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@264 -- # local dev 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:23.159 10:47:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@130 -- # return 0 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:25.695 10:47:05 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_1 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # _dev=0 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@41 -- # dev_map=() 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/setup.sh@284 -- # iptr 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-save 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- nvmf/common.sh@542 -- # iptables-restore 00:29:25.695 00:29:25.695 real 0m16.249s 00:29:25.695 user 0m26.201s 00:29:25.695 sys 0m6.115s 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:29:25.695 ************************************ 00:29:25.695 END TEST nvmf_delete_subsystem 00:29:25.695 ************************************ 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:25.695 
************************************ 00:29:25.695 START TEST nvmf_host_management 00:29:25.695 ************************************ 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp --interrupt-mode 00:29:25.695 * Looking for test storage... 00:29:25.695 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lcov --version 00:29:25.695 10:47:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # IFS=.-: 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@336 -- # read -ra ver1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # IFS=.-: 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@337 -- # read -ra ver2 00:29:25.695 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@338 -- # local 'op=<' 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@340 -- # ver1_l=2 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@341 -- # ver2_l=1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@344 -- # case "$op" in 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@345 -- # : 1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # decimal 1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@365 -- # ver1[v]=1 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # decimal 2 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@353 -- # local d=2 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:25.695 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@355 -- # echo 2 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@366 -- # ver2[v]=2 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@368 -- # return 0 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.696 --rc genhtml_branch_coverage=1 00:29:25.696 --rc genhtml_function_coverage=1 00:29:25.696 --rc genhtml_legend=1 00:29:25.696 --rc geninfo_all_blocks=1 00:29:25.696 --rc geninfo_unexecuted_blocks=1 00:29:25.696 00:29:25.696 ' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.696 --rc genhtml_branch_coverage=1 00:29:25.696 --rc genhtml_function_coverage=1 00:29:25.696 --rc genhtml_legend=1 00:29:25.696 --rc geninfo_all_blocks=1 00:29:25.696 --rc geninfo_unexecuted_blocks=1 00:29:25.696 00:29:25.696 ' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:29:25.696 --rc genhtml_branch_coverage=1 00:29:25.696 --rc genhtml_function_coverage=1 00:29:25.696 --rc genhtml_legend=1 00:29:25.696 --rc geninfo_all_blocks=1 00:29:25.696 --rc geninfo_unexecuted_blocks=1 00:29:25.696 00:29:25.696 ' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:25.696 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:25.696 --rc genhtml_branch_coverage=1 00:29:25.696 --rc genhtml_function_coverage=1 00:29:25.696 --rc genhtml_legend=1 00:29:25.696 --rc geninfo_all_blocks=1 00:29:25.696 --rc geninfo_unexecuted_blocks=1 00:29:25.696 00:29:25.696 ' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.696 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@15 -- # shopt -s extglob 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.696 
10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@50 -- # : 0 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management 
-- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@260 -- # remove_target_ns 00:29:25.696 10:47:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@125 -- # xtrace_disable 00:29:25.696 10:47:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # pci_devs=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # 
net_devs=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # e810=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@136 -- # local -ga e810 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # x722=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@137 -- # local -ga x722 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # mlx=() 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@138 -- # local -ga mlx 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:32.263 10:47:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:32.263 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice 
== unknown ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:32.263 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:32.263 10:47:11 
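The device-ID tests above bucket each discovered NIC by its PCI vendor:device pair (Intel `0x8086` E810/X722 IDs versus Mellanox `0x15b3` ConnectX IDs) before deciding which ports to use. A condensed sketch of that classification, using a plain associative array instead of the script's `pci_bus_cache`; the ID-to-family mapping is taken from the `e810+=`/`x722+=`/`mlx+=` lines visible in the trace, and `classify` is a hypothetical helper name:

```shell
#!/usr/bin/env bash
# Hypothetical condensed form of the vendor:device bucketing done in
# nvmf/common.sh; the IDs are the ones appended in the trace above.
declare -A nic_family=(
  [0x8086:0x1592]=e810  [0x8086:0x159b]=e810   # Intel E810
  [0x8086:0x37d2]=x722                         # Intel X722
  [0x15b3:0x1017]=mlx   [0x15b3:0x1019]=mlx    # Mellanox ConnectX
  [0x15b3:0x101b]=mlx   [0x15b3:0x101d]=mlx
)

classify() { echo "${nic_family[$1:$2]:-unknown}"; }

classify 0x8086 0x159b   # e810 -- matches "Found 0000:86:00.0 (0x8086 - 0x159b)"
```

This is why the trace then takes the `[[ e810 == e810 ]]` branch and narrows `pci_devs` to the two E810 ports.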
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.263 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:32.263 Found net devices under 0000:86:00.0: cvl_0_0 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@234 -- # [[ up == up ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:32.264 Found net devices under 0000:86:00.1: cvl_0_1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@262 -- # is_hw=yes 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@257 -- # create_target_ns 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:32.264 
10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@28 -- # local -g _dev 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( 
_dev = _dev, max = _dev )) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # ips=() 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:32.264 10:47:11 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772161 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 
10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:32.264 10.0.0.1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@11 -- # local val=167772162 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee 
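The `set_ip` calls above derive dotted-quad addresses from a packed 32-bit pool value: 167772161 is 0x0A000001, i.e. 10.0.0.1, and the next device gets 167772162 = 10.0.0.2. A minimal standalone re-implementation of that conversion (the real helper is `val_to_ip` in nvmf/setup.sh; the trace shows only its final `printf '%u.%u.%u.%u\n' 10 0 0 1`, so the byte-shifting below is an assumed equivalent, not the script's exact code):

```shell
#!/usr/bin/env bash
# Sketch of val_to_ip: unpack a 32-bit integer into dotted-quad IPv4,
# one byte per octet, most significant byte first.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xFF )) \
    $(( (val >> 16) & 0xFF )) \
    $(( (val >> 8)  & 0xFF )) \
    $((  val        & 0xFF ))
}

val_to_ip 167772161   # 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # 10.0.0.2
```

Allocating from an integer pool like this lets `setup_interfaces` hand out consecutive initiator/target address pairs (`ips=("$ip" $((++ip)))`) without string parsing.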
/sys/class/net/cvl_0_1/ifalias' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:32.264 10.0.0.2 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:32.264 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
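Taken together, the `nvmf_tcp_init` steps traced above create a namespace, move the target-side port into it, address both ends, bring the links up, and open TCP port 4420 (the NVMe-oF default) in iptables. Since the real `ip`/`iptables` calls need root, the sketch below is a dry-run replay that only echoes each command; the device names `cvl_0_0`/`cvl_0_1` and the `nvmf_ns_spdk` namespace are taken from the log, while `run` is a hypothetical wrapper:

```shell
#!/usr/bin/env bash
# Dry-run replay of the namespace wiring performed by nvmf/setup.sh.
# run() echoes instead of executing, since the real calls require root.
run() { echo "+ $*"; }

ns=nvmf_ns_spdk
run ip netns add "$ns"                                   # create_target_ns
run ip netns exec "$ns" ip link set lo up                # loopback inside the ns
run ip link set cvl_0_1 netns "$ns"                      # add_to_ns: target port
run ip addr add 10.0.0.1/24 dev cvl_0_0                  # set_ip, initiator side
run ip netns exec "$ns" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up                               # set_up, both ends
run ip netns exec "$ns" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT
```

Keeping the target port in its own namespace is what makes the later bidirectional `ping` checks meaningful: traffic between 10.0.0.1 and 10.0.0.2 must actually traverse the physical link rather than being short-circuited by the host stack.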
nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:32.265 10:47:11 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@99 -- # 
ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:32.265 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:29:32.265 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.469 ms 00:29:32.265 00:29:32.265 --- 10.0.0.1 ping statistics --- 00:29:32.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.265 rtt min/avg/max/mdev = 0.469/0.469/0.469/0.000 ms 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n 
ns=NVMF_TARGET_NS_CMD 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 
-- # eval ' ping -c 1 10.0.0.2' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:32.265 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:32.265 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:29:32.265 00:29:32.265 --- 10.0.0.2 ping statistics --- 00:29:32.265 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:32.265 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@270 -- # return 0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:32.265 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:32.265 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:29:32.265 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target0 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@337 
-- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@107 -- # local dev=target1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@109 -- # return 1 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@168 -- # dev= 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@169 -- # return 0 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- 
nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@328 -- # nvmfpid=3412095 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@329 -- # waitforlisten 3412095 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1E 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3412095 ']' 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:32.266 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.266 10:47:12 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.266 [2024-11-20 10:47:12.210125] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:32.266 [2024-11-20 10:47:12.211110] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:29:32.266 [2024-11-20 10:47:12.211152] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:32.266 [2024-11-20 10:47:12.290662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:32.266 [2024-11-20 10:47:12.335666] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:29:32.266 [2024-11-20 10:47:12.335700] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:32.266 [2024-11-20 10:47:12.335707] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:32.266 [2024-11-20 10:47:12.335713] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:32.266 [2024-11-20 10:47:12.335719] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:32.266 [2024-11-20 10:47:12.337123] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:32.266 [2024-11-20 10:47:12.337242] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:29:32.266 [2024-11-20 10:47:12.337267] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:32.266 [2024-11-20 10:47:12.337268] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:29:32.266 [2024-11-20 10:47:12.404347] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:32.266 [2024-11-20 10:47:12.405051] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:29:32.266 [2024-11-20 10:47:12.405321] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:29:32.266 [2024-11-20 10:47:12.405675] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:32.266 [2024-11-20 10:47:12.405717] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.525 [2024-11-20 10:47:13.086101] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter create_subsystem 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.525 10:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.525 Malloc0 00:29:32.525 [2024-11-20 10:47:13.174286] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=3412365 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 3412365 /var/tmp/bdevperf.sock 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@835 -- # '[' -z 3412365 ']' 00:29:32.525 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bdevperf.sock 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:29:32.526 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:29:32.526 { 00:29:32.526 "params": { 00:29:32.526 "name": "Nvme$subsystem", 00:29:32.526 "trtype": "$TEST_TRANSPORT", 00:29:32.526 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:32.526 "adrfam": "ipv4", 00:29:32.526 "trsvcid": "$NVMF_PORT", 00:29:32.526 "subnqn": 
"nqn.2016-06.io.spdk:cnode$subsystem", 00:29:32.526 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:32.526 "hdgst": ${hdgst:-false}, 00:29:32.526 "ddgst": ${ddgst:-false} 00:29:32.526 }, 00:29:32.526 "method": "bdev_nvme_attach_controller" 00:29:32.526 } 00:29:32.526 EOF 00:29:32.526 )") 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:29:32.526 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:29:32.526 "params": { 00:29:32.526 "name": "Nvme0", 00:29:32.526 "trtype": "tcp", 00:29:32.526 "traddr": "10.0.0.2", 00:29:32.526 "adrfam": "ipv4", 00:29:32.526 "trsvcid": "4420", 00:29:32.526 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:32.526 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:32.526 "hdgst": false, 00:29:32.526 "ddgst": false 00:29:32.526 }, 00:29:32.526 "method": "bdev_nvme_attach_controller" 00:29:32.526 }' 00:29:32.783 [2024-11-20 10:47:13.269134] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:29:32.783 [2024-11-20 10:47:13.269181] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412365 ] 00:29:32.783 [2024-11-20 10:47:13.345724] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:32.783 [2024-11-20 10:47:13.386848] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:33.041 Running I/O for 10 seconds... 
00:29:33.041 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:33.041 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@868 -- # return 0 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:29:33.042 10:47:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.042 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=105 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@58 -- # '[' 105 -ge 100 ']' 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@60 -- # break 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.301 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:33.301 
[2024-11-20 10:47:13.809954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.809991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810046] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810068] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810075] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.301 [2024-11-20 10:47:13.810083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.301 [2024-11-20 10:47:13.810090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810109] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810181] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 
p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810252] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810305] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 
10:47:13.810334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.302 [2024-11-20 10:47:13.810342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.302 [2024-11-20 10:47:13.810349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 [... identical WRITE command / ABORTED - SQ DELETION (00/08) completion pairs repeat for cid:27 through cid:58, lba:28032 through lba:32000, len:128, timestamps 10:47:13.810357 through 10:47:13.810825 ...] 00:29:33.303 [2024-11-20 10:47:13.810832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1
cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:29:33.303 [2024-11-20 10:47:13.810942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.810969] nvme_qpair.c: 812:spdk_nvme_qpair_process_completions: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] CQ transport error -6 (No such device or address) on qpair id 1 00:29:33.303 [2024-11-20 10:47:13.811885] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0, 1] resetting controller 00:29:33.303 task offset: 24960 on job bdev=Nvme0n1 fails 00:29:33.303 00:29:33.303 Latency(us) 00:29:33.303 [2024-11-20T09:47:14.034Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:33.303 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:33.303 Job: Nvme0n1 ended in about 0.11 seconds with error 00:29:33.303 Verification LBA range: start 0x0 length 0x400 00:29:33.303 Nvme0n1 : 0.11 1766.91 110.43 588.97 0.00 25037.03 1349.73 26588.89 00:29:33.303 [2024-11-20T09:47:14.034Z] =================================================================================================================== 00:29:33.303 [2024-11-20T09:47:14.034Z] Total : 1766.91 110.43 588.97 0.00 25037.03 1349.73 26588.89 00:29:33.303 [2024-11-20 10:47:13.814250] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:29:33.303 [2024-11-20 10:47:13.814270] 
nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ba500 (9): Bad file descriptor 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:29:33.303 [2024-11-20 10:47:13.815270] ctrlr.c: 823:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode0' does not allow host 'nqn.2016-06.io.spdk:host0' 00:29:33.303 [2024-11-20 10:47:13.815344] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:3 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:29:33.303 [2024-11-20 10:47:13.815365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND SPECIFIC (01/84) qid:0 cid:3 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:29:33.303 [2024-11-20 10:47:13.815377] nvme_fabric.c: 599:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode0 00:29:33.303 [2024-11-20 10:47:13.815384] nvme_fabric.c: 610:nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 132 00:29:33.303 [2024-11-20 10:47:13.815392] nvme_tcp.c:2348:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:29:33.303 [2024-11-20 10:47:13.815399] nvme_tcp.c:2125:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x11ba500 00:29:33.303 [2024-11-20 10:47:13.815418] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x11ba500 (9): Bad file descriptor 00:29:33.303 [2024-11-20 10:47:13.815429] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Ctrlr is in error state 00:29:33.303 [2024-11-20 10:47:13.815436] 
nvme_ctrlr.c:1826:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] controller reinitialization failed 00:29:33.303 [2024-11-20 10:47:13.815445] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] in failed state. 00:29:33.303 [2024-11-20 10:47:13.815454] bdev_nvme.c:2280:bdev_nvme_reset_ctrlr_complete: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 1] Resetting controller failed. 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@563 -- # xtrace_disable 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:29:33.303 10:47:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 3412365 00:29:34.236 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (3412365) - No such process 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@91 -- # true 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:29:34.236 
10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # config=() 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@372 -- # local subsystem config 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:29:34.236 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:29:34.236 { 00:29:34.237 "params": { 00:29:34.237 "name": "Nvme$subsystem", 00:29:34.237 "trtype": "$TEST_TRANSPORT", 00:29:34.237 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:34.237 "adrfam": "ipv4", 00:29:34.237 "trsvcid": "$NVMF_PORT", 00:29:34.237 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:34.237 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:34.237 "hdgst": ${hdgst:-false}, 00:29:34.237 "ddgst": ${ddgst:-false} 00:29:34.237 }, 00:29:34.237 "method": "bdev_nvme_attach_controller" 00:29:34.237 } 00:29:34.237 EOF 00:29:34.237 )") 00:29:34.237 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@394 -- # cat 00:29:34.237 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@396 -- # jq . 
00:29:34.237 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@397 -- # IFS=, 00:29:34.237 10:47:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:29:34.237 "params": { 00:29:34.237 "name": "Nvme0", 00:29:34.237 "trtype": "tcp", 00:29:34.237 "traddr": "10.0.0.2", 00:29:34.237 "adrfam": "ipv4", 00:29:34.237 "trsvcid": "4420", 00:29:34.237 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:34.237 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:34.237 "hdgst": false, 00:29:34.237 "ddgst": false 00:29:34.237 }, 00:29:34.237 "method": "bdev_nvme_attach_controller" 00:29:34.237 }' 00:29:34.237 [2024-11-20 10:47:14.882522] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:29:34.237 [2024-11-20 10:47:14.882574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3412607 ] 00:29:34.237 [2024-11-20 10:47:14.957714] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:34.494 [2024-11-20 10:47:14.996647] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.751 Running I/O for 1 seconds... 
00:29:35.683 2120.00 IOPS, 132.50 MiB/s 00:29:35.683 Latency(us) 00:29:35.683 [2024-11-20T09:47:16.414Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:29:35.683 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:29:35.683 Verification LBA range: start 0x0 length 0x400 00:29:35.683 Nvme0n1 : 1.05 2071.79 129.49 0.00 0.00 29292.97 2590.23 43191.34 00:29:35.683 [2024-11-20T09:47:16.414Z] =================================================================================================================== 00:29:35.683 [2024-11-20T09:47:16.414Z] Total : 2071.79 129.49 0.00 0.00 29292.97 2590.23 43191.34 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@99 -- # sync 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@102 -- # set +e 00:29:35.941 10:47:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:35.941 rmmod nvme_tcp 00:29:35.941 rmmod nvme_fabrics 00:29:35.941 rmmod nvme_keyring 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@106 -- # set -e 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@107 -- # return 0 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@336 -- # '[' -n 3412095 ']' 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@337 -- # killprocess 3412095 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@954 -- # '[' -z 3412095 ']' 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@958 -- # kill -0 3412095 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # uname 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3412095 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:35.941 10:47:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3412095' 00:29:35.941 killing process with pid 3412095 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@973 -- # kill 3412095 00:29:35.941 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@978 -- # wait 3412095 00:29:36.199 [2024-11-20 10:47:16.817192] app.c: 721:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@342 -- # nvmf_fini 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@264 -- # local dev 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:36.199 10:47:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@268 -- # delete_main_bridge 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@130 -- # return 0 00:29:38.725 10:47:18 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 
00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # _dev=0 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@41 -- # dev_map=() 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/setup.sh@284 -- # iptr 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-save 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- nvmf/common.sh@542 -- # iptables-restore 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:29:38.725 00:29:38.725 real 0m13.032s 00:29:38.725 user 0m17.755s 00:29:38.725 sys 0m6.362s 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 ************************************ 00:29:38.725 END TEST nvmf_host_management 00:29:38.725 ************************************ 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 
']' 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:38.725 10:47:18 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:29:38.725 ************************************ 00:29:38.725 START TEST nvmf_lvol 00:29:38.725 ************************************ 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp --interrupt-mode 00:29:38.725 * Looking for test storage... 00:29:38.725 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lcov --version 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # IFS=.-: 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@336 -- # read -ra ver1 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # IFS=.-: 00:29:38.725 10:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@337 -- # read -ra ver2 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@338 -- # local 'op=<' 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@340 -- # ver1_l=2 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@341 -- # ver2_l=1 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:38.725 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@344 -- # case "$op" in 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@345 -- # : 1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # decimal 1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@365 -- # ver1[v]=1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # decimal 2 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@353 -- # local d=2 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@355 -- # echo 2 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@366 -- # ver2[v]=2 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@368 -- # return 0 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:29:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.726 --rc genhtml_branch_coverage=1 00:29:38.726 --rc 
genhtml_function_coverage=1 00:29:38.726 --rc genhtml_legend=1 00:29:38.726 --rc geninfo_all_blocks=1 00:29:38.726 --rc geninfo_unexecuted_blocks=1 00:29:38.726 00:29:38.726 ' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:29:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.726 --rc genhtml_branch_coverage=1 00:29:38.726 --rc genhtml_function_coverage=1 00:29:38.726 --rc genhtml_legend=1 00:29:38.726 --rc geninfo_all_blocks=1 00:29:38.726 --rc geninfo_unexecuted_blocks=1 00:29:38.726 00:29:38.726 ' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:29:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.726 --rc genhtml_branch_coverage=1 00:29:38.726 --rc genhtml_function_coverage=1 00:29:38.726 --rc genhtml_legend=1 00:29:38.726 --rc geninfo_all_blocks=1 00:29:38.726 --rc geninfo_unexecuted_blocks=1 00:29:38.726 00:29:38.726 ' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:29:38.726 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:38.726 --rc genhtml_branch_coverage=1 00:29:38.726 --rc genhtml_function_coverage=1 00:29:38.726 --rc genhtml_legend=1 00:29:38.726 --rc geninfo_all_blocks=1 00:29:38.726 --rc geninfo_unexecuted_blocks=1 00:29:38.726 00:29:38.726 ' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@9 -- # 
NVMF_PORT=4420 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@15 -- # shopt -s extglob 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:29:38.726 10:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@50 -- # : 0 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@54 -- # have_pci_nics=0 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:29:38.726 10:47:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@296 -- # prepare_net_devs 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@258 -- # local -g is_hw=no 00:29:38.726 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@260 -- # remove_target_ns 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@125 -- # xtrace_disable 00:29:38.727 10:47:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # pci_devs=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@131 -- # local -a pci_devs 00:29:45.287 10:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # pci_net_devs=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # pci_drivers=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@133 -- # local -A pci_drivers 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # net_devs=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@135 -- # local -ga net_devs 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # e810=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@136 -- # local -ga e810 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # x722=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@137 -- # local -ga x722 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # mlx=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@138 -- # local -ga mlx 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:45.287 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:45.287 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:45.287 Found net devices under 0000:86:00.0: cvl_0_0 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@234 -- # [[ up == up ]] 00:29:45.287 10:47:24 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:45.287 Found net devices under 0000:86:00.1: cvl_0_1 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@262 -- # is_hw=yes 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@257 -- # create_target_ns 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@146 -- # 
NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@27 -- # local -gA dev_map 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@28 -- # local -g _dev 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@44 -- # ips=() 00:29:45.287 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@153 -- # ip link set 
cvl_0_1 netns nvmf_ns_spdk 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772161 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:29:45.288 10.0.0.1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # [[ -n 
NVMF_TARGET_NS_CMD ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@11 -- # local val=167772162 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:29:45.288 10.0.0.2 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:29:45.288 10:47:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev++, 
ip_pool += 2 )) 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@38 -- # ping_ips 1 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:29:45.288 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:45.288 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.381 ms 00:29:45.288 00:29:45.288 --- 10.0.0.1 ping statistics --- 00:29:45.288 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.288 rtt min/avg/max/mdev = 0.381/0.381/0.381/0.000 ms 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:45.288 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:45.289 10:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:29:45.289 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:29:45.289 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.210 ms 00:29:45.289 00:29:45.289 --- 10.0.0.2 ping statistics --- 00:29:45.289 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:45.289 rtt min/avg/max/mdev = 0.210/0.210/0.210/0.000 ms 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair++ )) 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@270 -- # return 0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:29:45.289 
10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:29:45.289 10:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=initiator1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target0 00:29:45.289 10:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # get_net_dev target1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@107 -- # local dev=target1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@109 -- # return 1 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@168 -- # dev= 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@169 -- # return 0 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:29:45.289 10:47:25 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@726 -- # xtrace_disable 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@328 -- # nvmfpid=3416390 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@329 -- # waitforlisten 3416390 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x7 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@835 -- # '[' -z 3416390 ']' 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:45.289 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:45.290 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.290 [2024-11-20 10:47:25.271259] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:29:45.290 [2024-11-20 10:47:25.272135] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:29:45.290 [2024-11-20 10:47:25.272169] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:45.290 [2024-11-20 10:47:25.348671] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:29:45.290 [2024-11-20 10:47:25.388067] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:45.290 [2024-11-20 10:47:25.388103] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:45.290 [2024-11-20 10:47:25.388109] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:45.290 [2024-11-20 10:47:25.388115] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:45.290 [2024-11-20 10:47:25.388120] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:45.290 [2024-11-20 10:47:25.389526] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:29:45.290 [2024-11-20 10:47:25.389636] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:29:45.290 [2024-11-20 10:47:25.389635] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:29:45.290 [2024-11-20 10:47:25.455935] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:29:45.290 [2024-11-20 10:47:25.456739] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:29:45.290 [2024-11-20 10:47:25.456923] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:29:45.290 [2024-11-20 10:47:25.457073] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@868 -- # return 0 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@732 -- # xtrace_disable 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:29:45.290 [2024-11-20 10:47:25.690521] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:29:45.290 10:47:25 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:29:45.549 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:29:45.549 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- 
target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:29:45.806 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:29:46.064 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=9202f50b-2444-4b78-afa4-2449635259b6 00:29:46.064 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 9202f50b-2444-4b78-afa4-2449635259b6 lvol 20 00:29:46.064 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=808320e6-07ac-4c12-bdf3-6aa41aecff4b 00:29:46.064 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:29:46.321 10:47:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 808320e6-07ac-4c12-bdf3-6aa41aecff4b 00:29:46.578 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:46.835 [2024-11-20 10:47:27.322355] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:46.835 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:29:46.835 
10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=3416871 00:29:46.835 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:29:46.835 10:47:27 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:29:48.209 10:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot 808320e6-07ac-4c12-bdf3-6aa41aecff4b MY_SNAPSHOT 00:29:48.209 10:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=9692d361-3da6-41f6-9b60-a17da8a6c987 00:29:48.209 10:47:28 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize 808320e6-07ac-4c12-bdf3-6aa41aecff4b 30 00:29:48.467 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone 9692d361-3da6-41f6-9b60-a17da8a6c987 MY_CLONE 00:29:48.725 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=1a4ec5d3-f7db-4971-b305-12f61d1f6f36 00:29:48.725 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 1a4ec5d3-f7db-4971-b305-12f61d1f6f36 00:29:49.291 10:47:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 3416871 00:29:57.397 Initializing NVMe Controllers 00:29:57.397 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:29:57.397 
Controller IO queue size 128, less than required. 00:29:57.397 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:29:57.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:29:57.397 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:29:57.397 Initialization complete. Launching workers. 00:29:57.397 ======================================================== 00:29:57.397 Latency(us) 00:29:57.397 Device Information : IOPS MiB/s Average min max 00:29:57.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12255.00 47.87 10445.35 1551.53 62193.60 00:29:57.397 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 12518.60 48.90 10225.55 1099.82 62870.34 00:29:57.397 ======================================================== 00:29:57.397 Total : 24773.60 96.77 10334.28 1099.82 62870.34 00:29:57.397 00:29:57.397 10:47:37 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:57.397 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 808320e6-07ac-4c12-bdf3-6aa41aecff4b 00:29:57.655 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 9202f50b-2444-4b78-afa4-2449635259b6 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- target/nvmf_lvol.sh@64 -- 
# nvmftestfini 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@335 -- # nvmfcleanup 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@99 -- # sync 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@102 -- # set +e 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@103 -- # for i in {1..20} 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:29:57.912 rmmod nvme_tcp 00:29:57.912 rmmod nvme_fabrics 00:29:57.912 rmmod nvme_keyring 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@106 -- # set -e 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@107 -- # return 0 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@336 -- # '[' -n 3416390 ']' 00:29:57.912 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@337 -- # killprocess 3416390 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@954 -- # '[' -z 3416390 ']' 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@958 -- # kill -0 3416390 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # uname 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 3416390 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3416390' 00:29:57.913 killing process with pid 3416390 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@973 -- # kill 3416390 00:29:57.913 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@978 -- # wait 3416390 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@342 -- # nvmf_fini 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@264 -- # local dev 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@267 -- # remove_target_ns 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:29:58.171 10:47:38 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@130 -- # return 0 00:30:00.709 
10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:00.709 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:00.710 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # _dev=0 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@41 -- # dev_map=() 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/setup.sh@284 -- # iptr 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-save 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- nvmf/common.sh@542 -- # iptables-restore 00:30:00.710 00:30:00.710 real 0m21.841s 00:30:00.710 user 0m55.420s 00:30:00.710 sys 0m9.843s 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:30:00.710 ************************************ 00:30:00.710 END TEST nvmf_lvol 00:30:00.710 ************************************ 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:00.710 10:47:40 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:00.710 ************************************ 00:30:00.710 START TEST nvmf_lvs_grow 00:30:00.710 ************************************ 00:30:00.710 10:47:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp --interrupt-mode 00:30:00.710 * Looking for test storage... 00:30:00.710 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lcov --version 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # IFS=.-: 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@336 -- # read -ra ver1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # IFS=.-: 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@337 -- # read -ra ver2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@338 -- # local 'op=<' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@340 -- # ver1_l=2 00:30:00.710 10:47:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@341 -- # ver2_l=1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@344 -- # case "$op" in 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@345 -- # : 1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # decimal 1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@365 -- # ver1[v]=1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # decimal 2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@353 -- # local d=2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@355 -- # echo 2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@366 -- # ver2[v]=2 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@368 -- # return 0 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:00.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.710 --rc genhtml_branch_coverage=1 00:30:00.710 --rc genhtml_function_coverage=1 00:30:00.710 --rc genhtml_legend=1 00:30:00.710 --rc geninfo_all_blocks=1 00:30:00.710 --rc geninfo_unexecuted_blocks=1 00:30:00.710 00:30:00.710 ' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:00.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.710 --rc genhtml_branch_coverage=1 00:30:00.710 --rc genhtml_function_coverage=1 00:30:00.710 --rc genhtml_legend=1 00:30:00.710 --rc geninfo_all_blocks=1 00:30:00.710 --rc geninfo_unexecuted_blocks=1 00:30:00.710 00:30:00.710 ' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:00.710 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:00.710 --rc genhtml_branch_coverage=1 00:30:00.710 --rc genhtml_function_coverage=1 00:30:00.710 --rc genhtml_legend=1 00:30:00.710 --rc geninfo_all_blocks=1 00:30:00.710 --rc geninfo_unexecuted_blocks=1 00:30:00.710 00:30:00.710 ' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:00.710 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:30:00.710 --rc genhtml_branch_coverage=1 00:30:00.710 --rc genhtml_function_coverage=1 00:30:00.710 --rc genhtml_legend=1 00:30:00.710 --rc geninfo_all_blocks=1 00:30:00.710 --rc geninfo_unexecuted_blocks=1 00:30:00.710 00:30:00.710 ' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@15 -- # shopt -s extglob 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:00.710 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@50 -- # : 0 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@260 -- # remove_target_ns 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@22 -- # _remove_target_ns 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@125 -- # xtrace_disable 00:30:00.711 10:47:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # pci_devs=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # net_devs=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # e810=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@136 -- # local -ga e810 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- 
# x722=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@137 -- # local -ga x722 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # mlx=() 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@138 -- # local -ga mlx 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:07.284 
10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:07.284 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:07.284 10:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:07.284 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:07.284 10:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:07.284 Found net devices under 0000:86:00.0: cvl_0_0 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:07.284 Found net devices under 0000:86:00.1: cvl_0_1 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 
00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@262 -- # is_hw=yes 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@257 -- # create_target_ns 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:07.284 
10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@28 -- # local -g _dev 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:07.284 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # ips=() 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # [[ 
tcp == tcp ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local 
val=167772161 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:07.285 10.0.0.1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@11 -- # local val=167772162 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:07.285 10:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:07.285 10.0.0.2 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.285 10:47:46 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:07.285 10:47:46 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:07.285 10:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:07.285 
10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:30:07.285 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:30:07.285 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:30:07.285 00:30:07.285 --- 10.0.0.1 ping statistics --- 00:30:07.285 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.285 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:07.285 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:07.286 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:07.286 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.220 ms 00:30:07.286 00:30:07.286 --- 10.0.0.2 ping statistics --- 00:30:07.286 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:07.286 rtt min/avg/max/mdev = 0.220/0.220/0.220/0.000 ms 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@270 -- # return 0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:07.286 10:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target0 
00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:07.286 
10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@107 -- # local dev=target1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@109 -- # return 1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@168 -- # dev= 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@169 -- # return 0 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:07.286 10:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.286 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@328 -- # nvmfpid=3422041 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@329 -- # waitforlisten 3422041 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@835 -- # '[' -z 3422041 ']' 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:07.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.287 [2024-11-20 10:47:47.288033] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:07.287 [2024-11-20 10:47:47.288915] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:07.287 [2024-11-20 10:47:47.288948] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:07.287 [2024-11-20 10:47:47.350476] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:07.287 [2024-11-20 10:47:47.391694] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:07.287 [2024-11-20 10:47:47.391729] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:07.287 [2024-11-20 10:47:47.391737] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:07.287 [2024-11-20 10:47:47.391742] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:07.287 [2024-11-20 10:47:47.391747] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:07.287 [2024-11-20 10:47:47.392283] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:07.287 [2024-11-20 10:47:47.458559] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:07.287 [2024-11-20 10:47:47.458769] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@868 -- # return 0 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:30:07.287 [2024-11-20 10:47:47.700925] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:07.287 ************************************ 00:30:07.287 START TEST lvs_grow_clean 00:30:07.287 ************************************ 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1129 -- # lvs_grow 00:30:07.287 10:47:47 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:07.287 10:47:47 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:07.546 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=2b7350ff-8373-4579-afca-af4498a6f819 00:30:07.546 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:07.546 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:07.805 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:07.805 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:07.805 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 2b7350ff-8373-4579-afca-af4498a6f819 lvol 150 00:30:08.063 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=96fefdd7-3aac-456f-bf4e-0fb00178eaa9 00:30:08.063 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:08.063 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:08.063 [2024-11-20 10:47:48.736663] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:08.063 [2024-11-20 10:47:48.736790] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:08.063 true 00:30:08.063 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:08.063 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:08.321 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:08.321 10:47:48 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:08.585 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 96fefdd7-3aac-456f-bf4e-0fb00178eaa9 00:30:08.845 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:08.845 [2024-11-20 10:47:49.481100] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:08.845 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3422535 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3422535 /var/tmp/bdevperf.sock 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@835 -- # '[' -z 3422535 ']' 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:09.104 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:09.104 10:47:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:09.104 [2024-11-20 10:47:49.714480] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:09.104 [2024-11-20 10:47:49.714526] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3422535 ] 00:30:09.104 [2024-11-20 10:47:49.788816] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:09.104 [2024-11-20 10:47:49.829959] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:10.041 10:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:10.041 10:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@868 -- # return 0 00:30:10.041 10:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:10.300 Nvme0n1 00:30:10.300 10:47:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:10.300 [ 00:30:10.300 { 00:30:10.300 "name": "Nvme0n1", 00:30:10.300 "aliases": [ 00:30:10.300 "96fefdd7-3aac-456f-bf4e-0fb00178eaa9" 00:30:10.300 ], 00:30:10.300 "product_name": "NVMe disk", 00:30:10.300 
"block_size": 4096, 00:30:10.300 "num_blocks": 38912, 00:30:10.300 "uuid": "96fefdd7-3aac-456f-bf4e-0fb00178eaa9", 00:30:10.300 "numa_id": 1, 00:30:10.300 "assigned_rate_limits": { 00:30:10.300 "rw_ios_per_sec": 0, 00:30:10.300 "rw_mbytes_per_sec": 0, 00:30:10.300 "r_mbytes_per_sec": 0, 00:30:10.300 "w_mbytes_per_sec": 0 00:30:10.300 }, 00:30:10.300 "claimed": false, 00:30:10.300 "zoned": false, 00:30:10.300 "supported_io_types": { 00:30:10.300 "read": true, 00:30:10.300 "write": true, 00:30:10.300 "unmap": true, 00:30:10.300 "flush": true, 00:30:10.300 "reset": true, 00:30:10.300 "nvme_admin": true, 00:30:10.300 "nvme_io": true, 00:30:10.300 "nvme_io_md": false, 00:30:10.300 "write_zeroes": true, 00:30:10.300 "zcopy": false, 00:30:10.300 "get_zone_info": false, 00:30:10.300 "zone_management": false, 00:30:10.300 "zone_append": false, 00:30:10.300 "compare": true, 00:30:10.300 "compare_and_write": true, 00:30:10.300 "abort": true, 00:30:10.300 "seek_hole": false, 00:30:10.300 "seek_data": false, 00:30:10.300 "copy": true, 00:30:10.300 "nvme_iov_md": false 00:30:10.300 }, 00:30:10.300 "memory_domains": [ 00:30:10.300 { 00:30:10.300 "dma_device_id": "system", 00:30:10.300 "dma_device_type": 1 00:30:10.300 } 00:30:10.300 ], 00:30:10.300 "driver_specific": { 00:30:10.300 "nvme": [ 00:30:10.300 { 00:30:10.300 "trid": { 00:30:10.300 "trtype": "TCP", 00:30:10.300 "adrfam": "IPv4", 00:30:10.300 "traddr": "10.0.0.2", 00:30:10.300 "trsvcid": "4420", 00:30:10.300 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:10.300 }, 00:30:10.300 "ctrlr_data": { 00:30:10.300 "cntlid": 1, 00:30:10.300 "vendor_id": "0x8086", 00:30:10.300 "model_number": "SPDK bdev Controller", 00:30:10.300 "serial_number": "SPDK0", 00:30:10.300 "firmware_revision": "25.01", 00:30:10.300 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:10.300 "oacs": { 00:30:10.300 "security": 0, 00:30:10.300 "format": 0, 00:30:10.300 "firmware": 0, 00:30:10.300 "ns_manage": 0 00:30:10.300 }, 00:30:10.300 "multi_ctrlr": true, 
00:30:10.300 "ana_reporting": false 00:30:10.300 }, 00:30:10.300 "vs": { 00:30:10.300 "nvme_version": "1.3" 00:30:10.300 }, 00:30:10.300 "ns_data": { 00:30:10.300 "id": 1, 00:30:10.300 "can_share": true 00:30:10.300 } 00:30:10.300 } 00:30:10.300 ], 00:30:10.300 "mp_policy": "active_passive" 00:30:10.300 } 00:30:10.300 } 00:30:10.301 ] 00:30:10.559 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3422763 00:30:10.559 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:10.559 10:47:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:10.559 Running I/O for 10 seconds... 00:30:11.548 Latency(us) 00:30:11.548 [2024-11-20T09:47:52.279Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:11.548 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:11.548 Nvme0n1 : 1.00 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:11.548 [2024-11-20T09:47:52.279Z] =================================================================================================================== 00:30:11.548 [2024-11-20T09:47:52.279Z] Total : 22098.00 86.32 0.00 0.00 0.00 0.00 0.00 00:30:11.548 00:30:12.484 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:12.484 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:12.484 Nvme0n1 : 2.00 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:30:12.484 [2024-11-20T09:47:53.215Z] 
=================================================================================================================== 00:30:12.484 [2024-11-20T09:47:53.215Z] Total : 22511.00 87.93 0.00 0.00 0.00 0.00 0.00 00:30:12.484 00:30:12.743 true 00:30:12.743 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:12.743 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:12.743 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:12.743 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:12.743 10:47:53 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 3422763 00:30:13.679 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:13.680 Nvme0n1 : 3.00 22627.33 88.39 0.00 0.00 0.00 0.00 0.00 00:30:13.680 [2024-11-20T09:47:54.411Z] =================================================================================================================== 00:30:13.680 [2024-11-20T09:47:54.411Z] Total : 22627.33 88.39 0.00 0.00 0.00 0.00 0.00 00:30:13.680 00:30:14.615 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:14.615 Nvme0n1 : 4.00 22717.25 88.74 0.00 0.00 0.00 0.00 0.00 00:30:14.615 [2024-11-20T09:47:55.346Z] =================================================================================================================== 00:30:14.615 [2024-11-20T09:47:55.346Z] Total : 22717.25 88.74 0.00 0.00 0.00 0.00 0.00 00:30:14.615 00:30:15.551 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:30:15.551 Nvme0n1 : 5.00 22796.60 89.05 0.00 0.00 0.00 0.00 0.00 00:30:15.551 [2024-11-20T09:47:56.282Z] =================================================================================================================== 00:30:15.551 [2024-11-20T09:47:56.282Z] Total : 22796.60 89.05 0.00 0.00 0.00 0.00 0.00 00:30:15.551 00:30:16.487 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:16.487 Nvme0n1 : 6.00 22849.50 89.26 0.00 0.00 0.00 0.00 0.00 00:30:16.487 [2024-11-20T09:47:57.218Z] =================================================================================================================== 00:30:16.487 [2024-11-20T09:47:57.218Z] Total : 22849.50 89.26 0.00 0.00 0.00 0.00 0.00 00:30:16.487 00:30:17.423 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:17.423 Nvme0n1 : 7.00 22887.29 89.40 0.00 0.00 0.00 0.00 0.00 00:30:17.423 [2024-11-20T09:47:58.154Z] =================================================================================================================== 00:30:17.423 [2024-11-20T09:47:58.154Z] Total : 22887.29 89.40 0.00 0.00 0.00 0.00 0.00 00:30:17.423 00:30:18.799 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:18.799 Nvme0n1 : 8.00 22915.62 89.51 0.00 0.00 0.00 0.00 0.00 00:30:18.799 [2024-11-20T09:47:59.530Z] =================================================================================================================== 00:30:18.799 [2024-11-20T09:47:59.530Z] Total : 22915.62 89.51 0.00 0.00 0.00 0.00 0.00 00:30:18.799 00:30:19.735 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:19.735 Nvme0n1 : 9.00 22923.56 89.55 0.00 0.00 0.00 0.00 0.00 00:30:19.735 [2024-11-20T09:48:00.466Z] =================================================================================================================== 00:30:19.735 [2024-11-20T09:48:00.466Z] Total : 22923.56 89.55 0.00 0.00 0.00 0.00 0.00 00:30:19.735 
00:30:20.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.671 Nvme0n1 : 10.00 22853.70 89.27 0.00 0.00 0.00 0.00 0.00 00:30:20.671 [2024-11-20T09:48:01.402Z] =================================================================================================================== 00:30:20.671 [2024-11-20T09:48:01.402Z] Total : 22853.70 89.27 0.00 0.00 0.00 0.00 0.00 00:30:20.671 00:30:20.671 00:30:20.671 Latency(us) 00:30:20.671 [2024-11-20T09:48:01.402Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.671 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:20.671 Nvme0n1 : 10.00 22856.12 89.28 0.00 0.00 5597.51 3557.67 27962.03 00:30:20.671 [2024-11-20T09:48:01.402Z] =================================================================================================================== 00:30:20.671 [2024-11-20T09:48:01.402Z] Total : 22856.12 89.28 0.00 0.00 5597.51 3557.67 27962.03 00:30:20.671 { 00:30:20.671 "results": [ 00:30:20.671 { 00:30:20.671 "job": "Nvme0n1", 00:30:20.671 "core_mask": "0x2", 00:30:20.671 "workload": "randwrite", 00:30:20.671 "status": "finished", 00:30:20.671 "queue_depth": 128, 00:30:20.671 "io_size": 4096, 00:30:20.671 "runtime": 10.004541, 00:30:20.671 "iops": 22856.12103543781, 00:30:20.671 "mibps": 89.28172279467894, 00:30:20.671 "io_failed": 0, 00:30:20.671 "io_timeout": 0, 00:30:20.671 "avg_latency_us": 5597.509761041573, 00:30:20.672 "min_latency_us": 3557.6685714285713, 00:30:20.672 "max_latency_us": 27962.02666666667 00:30:20.672 } 00:30:20.672 ], 00:30:20.672 "core_count": 1 00:30:20.672 } 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3422535 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # '[' -z 3422535 ']' 00:30:20.672 10:48:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # kill -0 3422535 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # uname 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3422535 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3422535' 00:30:20.672 killing process with pid 3422535 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@973 -- # kill 3422535 00:30:20.672 Received shutdown signal, test time was about 10.000000 seconds 00:30:20.672 00:30:20.672 Latency(us) 00:30:20.672 [2024-11-20T09:48:01.403Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.672 [2024-11-20T09:48:01.403Z] =================================================================================================================== 00:30:20.672 [2024-11-20T09:48:01.403Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@978 -- # wait 3422535 00:30:20.672 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:20.930 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:21.188 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:21.188 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:21.447 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:21.447 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:30:21.447 10:48:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:21.447 [2024-11-20 10:48:02.152729] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@652 -- # local es=0 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@654 -- # valid_exec_arg 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:21.706 request: 00:30:21.706 { 00:30:21.706 "uuid": "2b7350ff-8373-4579-afca-af4498a6f819", 00:30:21.706 "method": 
"bdev_lvol_get_lvstores", 00:30:21.706 "req_id": 1 00:30:21.706 } 00:30:21.706 Got JSON-RPC error response 00:30:21.706 response: 00:30:21.706 { 00:30:21.706 "code": -19, 00:30:21.706 "message": "No such device" 00:30:21.706 } 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@655 -- # es=1 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:21.706 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:21.966 aio_bdev 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 96fefdd7-3aac-456f-bf4e-0fb00178eaa9 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@903 -- # local bdev_name=96fefdd7-3aac-456f-bf4e-0fb00178eaa9 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # local i 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- 
common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:21.966 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:22.225 10:48:02 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 96fefdd7-3aac-456f-bf4e-0fb00178eaa9 -t 2000 00:30:22.484 [ 00:30:22.484 { 00:30:22.484 "name": "96fefdd7-3aac-456f-bf4e-0fb00178eaa9", 00:30:22.484 "aliases": [ 00:30:22.484 "lvs/lvol" 00:30:22.484 ], 00:30:22.484 "product_name": "Logical Volume", 00:30:22.484 "block_size": 4096, 00:30:22.484 "num_blocks": 38912, 00:30:22.484 "uuid": "96fefdd7-3aac-456f-bf4e-0fb00178eaa9", 00:30:22.484 "assigned_rate_limits": { 00:30:22.484 "rw_ios_per_sec": 0, 00:30:22.484 "rw_mbytes_per_sec": 0, 00:30:22.484 "r_mbytes_per_sec": 0, 00:30:22.484 "w_mbytes_per_sec": 0 00:30:22.484 }, 00:30:22.484 "claimed": false, 00:30:22.484 "zoned": false, 00:30:22.484 "supported_io_types": { 00:30:22.484 "read": true, 00:30:22.484 "write": true, 00:30:22.484 "unmap": true, 00:30:22.484 "flush": false, 00:30:22.484 "reset": true, 00:30:22.484 "nvme_admin": false, 00:30:22.484 "nvme_io": false, 00:30:22.484 "nvme_io_md": false, 00:30:22.484 "write_zeroes": true, 00:30:22.484 "zcopy": false, 00:30:22.484 "get_zone_info": false, 00:30:22.484 "zone_management": false, 00:30:22.484 "zone_append": false, 00:30:22.484 "compare": false, 00:30:22.484 "compare_and_write": false, 00:30:22.484 "abort": false, 00:30:22.484 "seek_hole": true, 00:30:22.484 "seek_data": true, 00:30:22.484 "copy": false, 00:30:22.484 "nvme_iov_md": false 00:30:22.484 }, 00:30:22.484 "driver_specific": { 00:30:22.484 "lvol": { 00:30:22.484 "lvol_store_uuid": "2b7350ff-8373-4579-afca-af4498a6f819", 00:30:22.484 "base_bdev": "aio_bdev", 00:30:22.484 
"thin_provision": false, 00:30:22.484 "num_allocated_clusters": 38, 00:30:22.484 "snapshot": false, 00:30:22.484 "clone": false, 00:30:22.484 "esnap_clone": false 00:30:22.484 } 00:30:22.484 } 00:30:22.484 } 00:30:22.484 ] 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@911 -- # return 0 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 2b7350ff-8373-4579-afca-af4498a6f819 00:30:22.484 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:22.743 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:22.743 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 96fefdd7-3aac-456f-bf4e-0fb00178eaa9 00:30:23.002 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 2b7350ff-8373-4579-afca-af4498a6f819 
00:30:23.261 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:23.520 10:48:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:23.520 00:30:23.520 real 0m16.256s 00:30:23.520 user 0m15.916s 00:30:23.520 sys 0m1.535s 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:30:23.520 ************************************ 00:30:23.520 END TEST lvs_grow_clean 00:30:23.520 ************************************ 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:23.520 ************************************ 00:30:23.520 START TEST lvs_grow_dirty 00:30:23.520 ************************************ 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1129 -- # lvs_grow dirty 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:30:23.520 10:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:23.520 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:23.521 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:23.780 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:30:23.780 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:30:23.780 10:48:04 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=3252a464-d8b2-476e-86af-36d5b631df9f 00:30:23.780 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:23.780 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:30:24.039 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:30:24.039 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:30:24.039 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 3252a464-d8b2-476e-86af-36d5b631df9f lvol 150 00:30:24.297 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:24.297 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:24.297 10:48:04 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:30:24.556 [2024-11-20 10:48:05.084689] bdev_aio.c:1053:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:30:24.556 [2024-11-20 
10:48:05.084823] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:30:24.556 true 00:30:24.556 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:24.556 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:30:24.814 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:30:24.814 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:30:24.814 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:25.074 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:30:25.333 [2024-11-20 10:48:05.821062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:25.333 10:48:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=3425410 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 3425410 /var/tmp/bdevperf.sock 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3425410 ']' 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:30:25.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:25.333 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:25.592 [2024-11-20 10:48:06.061792] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:30:25.592 [2024-11-20 10:48:06.061841] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3425410 ] 00:30:25.592 [2024-11-20 10:48:06.118049] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:25.592 [2024-11-20 10:48:06.160436] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.592 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:25.592 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:25.592 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:30:26.159 Nvme0n1 00:30:26.159 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:30:26.159 [ 00:30:26.159 { 00:30:26.159 "name": "Nvme0n1", 00:30:26.159 "aliases": [ 00:30:26.159 "1d830457-6b42-4703-8eea-ca1fef48b8a6" 00:30:26.159 ], 00:30:26.159 "product_name": "NVMe disk", 00:30:26.159 "block_size": 4096, 00:30:26.159 "num_blocks": 38912, 00:30:26.159 "uuid": "1d830457-6b42-4703-8eea-ca1fef48b8a6", 00:30:26.159 "numa_id": 1, 00:30:26.159 "assigned_rate_limits": { 00:30:26.159 "rw_ios_per_sec": 0, 00:30:26.159 "rw_mbytes_per_sec": 0, 00:30:26.159 "r_mbytes_per_sec": 0, 00:30:26.159 "w_mbytes_per_sec": 0 00:30:26.159 }, 00:30:26.159 "claimed": false, 00:30:26.159 "zoned": false, 
00:30:26.159 "supported_io_types": { 00:30:26.159 "read": true, 00:30:26.159 "write": true, 00:30:26.159 "unmap": true, 00:30:26.159 "flush": true, 00:30:26.159 "reset": true, 00:30:26.159 "nvme_admin": true, 00:30:26.159 "nvme_io": true, 00:30:26.159 "nvme_io_md": false, 00:30:26.159 "write_zeroes": true, 00:30:26.159 "zcopy": false, 00:30:26.159 "get_zone_info": false, 00:30:26.159 "zone_management": false, 00:30:26.159 "zone_append": false, 00:30:26.159 "compare": true, 00:30:26.159 "compare_and_write": true, 00:30:26.159 "abort": true, 00:30:26.159 "seek_hole": false, 00:30:26.159 "seek_data": false, 00:30:26.159 "copy": true, 00:30:26.159 "nvme_iov_md": false 00:30:26.159 }, 00:30:26.159 "memory_domains": [ 00:30:26.159 { 00:30:26.159 "dma_device_id": "system", 00:30:26.159 "dma_device_type": 1 00:30:26.159 } 00:30:26.159 ], 00:30:26.159 "driver_specific": { 00:30:26.159 "nvme": [ 00:30:26.159 { 00:30:26.159 "trid": { 00:30:26.159 "trtype": "TCP", 00:30:26.159 "adrfam": "IPv4", 00:30:26.159 "traddr": "10.0.0.2", 00:30:26.159 "trsvcid": "4420", 00:30:26.159 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:30:26.159 }, 00:30:26.159 "ctrlr_data": { 00:30:26.159 "cntlid": 1, 00:30:26.159 "vendor_id": "0x8086", 00:30:26.159 "model_number": "SPDK bdev Controller", 00:30:26.159 "serial_number": "SPDK0", 00:30:26.159 "firmware_revision": "25.01", 00:30:26.159 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:26.159 "oacs": { 00:30:26.159 "security": 0, 00:30:26.159 "format": 0, 00:30:26.159 "firmware": 0, 00:30:26.159 "ns_manage": 0 00:30:26.159 }, 00:30:26.159 "multi_ctrlr": true, 00:30:26.159 "ana_reporting": false 00:30:26.159 }, 00:30:26.159 "vs": { 00:30:26.159 "nvme_version": "1.3" 00:30:26.159 }, 00:30:26.159 "ns_data": { 00:30:26.159 "id": 1, 00:30:26.159 "can_share": true 00:30:26.159 } 00:30:26.159 } 00:30:26.159 ], 00:30:26.159 "mp_policy": "active_passive" 00:30:26.159 } 00:30:26.159 } 00:30:26.159 ] 00:30:26.159 10:48:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=3425770 00:30:26.159 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:30:26.159 10:48:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:30:26.418 Running I/O for 10 seconds... 00:30:27.353 Latency(us) 00:30:27.353 [2024-11-20T09:48:08.084Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:27.353 Nvme0n1 : 1.00 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:30:27.353 [2024-11-20T09:48:08.084Z] =================================================================================================================== 00:30:27.353 [2024-11-20T09:48:08.084Z] Total : 22162.00 86.57 0.00 0.00 0.00 0.00 0.00 00:30:27.353 00:30:28.288 10:48:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:28.288 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:28.288 Nvme0n1 : 2.00 22528.00 88.00 0.00 0.00 0.00 0.00 0.00 00:30:28.288 [2024-11-20T09:48:09.019Z] =================================================================================================================== 00:30:28.288 [2024-11-20T09:48:09.019Z] Total : 22528.00 88.00 0.00 0.00 0.00 0.00 0.00 00:30:28.288 00:30:28.546 true 00:30:28.546 10:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:28.546 10:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:30:28.546 10:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:30:28.546 10:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:30:28.546 10:48:09 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 3425770 00:30:29.482 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:29.482 Nvme0n1 : 3.00 22638.67 88.43 0.00 0.00 0.00 0.00 0.00 00:30:29.482 [2024-11-20T09:48:10.213Z] =================================================================================================================== 00:30:29.482 [2024-11-20T09:48:10.213Z] Total : 22638.67 88.43 0.00 0.00 0.00 0.00 0.00 00:30:29.482 00:30:30.417 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:30.417 Nvme0n1 : 4.00 22630.50 88.40 0.00 0.00 0.00 0.00 0.00 00:30:30.417 [2024-11-20T09:48:11.148Z] =================================================================================================================== 00:30:30.417 [2024-11-20T09:48:11.148Z] Total : 22630.50 88.40 0.00 0.00 0.00 0.00 0.00 00:30:30.417 00:30:31.353 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:31.353 Nvme0n1 : 5.00 22701.80 88.68 0.00 0.00 0.00 0.00 0.00 00:30:31.353 [2024-11-20T09:48:12.084Z] =================================================================================================================== 00:30:31.353 [2024-11-20T09:48:12.084Z] Total : 22701.80 88.68 0.00 0.00 0.00 0.00 0.00 00:30:31.353 00:30:32.289 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 
00:30:32.289 Nvme0n1 : 6.00 22770.50 88.95 0.00 0.00 0.00 0.00 0.00 00:30:32.289 [2024-11-20T09:48:13.020Z] =================================================================================================================== 00:30:32.289 [2024-11-20T09:48:13.020Z] Total : 22770.50 88.95 0.00 0.00 0.00 0.00 0.00 00:30:32.289 00:30:33.665 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:33.665 Nvme0n1 : 7.00 22783.29 89.00 0.00 0.00 0.00 0.00 0.00 00:30:33.665 [2024-11-20T09:48:14.396Z] =================================================================================================================== 00:30:33.665 [2024-11-20T09:48:14.396Z] Total : 22783.29 89.00 0.00 0.00 0.00 0.00 0.00 00:30:33.665 00:30:34.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:34.233 Nvme0n1 : 8.00 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:30:34.233 [2024-11-20T09:48:14.964Z] =================================================================================================================== 00:30:34.233 [2024-11-20T09:48:14.964Z] Total : 22840.50 89.22 0.00 0.00 0.00 0.00 0.00 00:30:34.233 00:30:35.610 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:35.610 Nvme0n1 : 9.00 22870.89 89.34 0.00 0.00 0.00 0.00 0.00 00:30:35.610 [2024-11-20T09:48:16.341Z] =================================================================================================================== 00:30:35.610 [2024-11-20T09:48:16.341Z] Total : 22870.89 89.34 0.00 0.00 0.00 0.00 0.00 00:30:35.610 00:30:36.545 00:30:36.545 Latency(us) 00:30:36.545 [2024-11-20T09:48:17.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.545 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:30:36.545 Nvme0n1 : 10.00 22905.37 89.47 0.00 0.00 5585.41 3276.80 26339.23 00:30:36.545 [2024-11-20T09:48:17.276Z] 
=================================================================================================================== 00:30:36.545 [2024-11-20T09:48:17.276Z] Total : 22905.37 89.47 0.00 0.00 5585.41 3276.80 26339.23 00:30:36.545 { 00:30:36.545 "results": [ 00:30:36.545 { 00:30:36.545 "job": "Nvme0n1", 00:30:36.545 "core_mask": "0x2", 00:30:36.545 "workload": "randwrite", 00:30:36.545 "status": "finished", 00:30:36.545 "queue_depth": 128, 00:30:36.545 "io_size": 4096, 00:30:36.545 "runtime": 10.00115, 00:30:36.545 "iops": 22905.365882923463, 00:30:36.545 "mibps": 89.47408548016978, 00:30:36.545 "io_failed": 0, 00:30:36.545 "io_timeout": 0, 00:30:36.545 "avg_latency_us": 5585.41243173938, 00:30:36.545 "min_latency_us": 3276.8, 00:30:36.545 "max_latency_us": 26339.230476190478 00:30:36.545 } 00:30:36.545 ], 00:30:36.545 "core_count": 1 00:30:36.545 } 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 3425410 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # '[' -z 3425410 ']' 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # kill -0 3425410 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # uname 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:36.545 10:48:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3425410 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3425410' 00:30:36.545 killing process with pid 3425410 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@973 -- # kill 3425410 00:30:36.545 Received shutdown signal, test time was about 10.000000 seconds 00:30:36.545 00:30:36.545 Latency(us) 00:30:36.545 [2024-11-20T09:48:17.276Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:36.545 [2024-11-20T09:48:17.276Z] =================================================================================================================== 00:30:36.545 [2024-11-20T09:48:17.276Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@978 -- # wait 3425410 00:30:36.545 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:30:36.803 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:30:37.060 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:37.060 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:30:37.060 10:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:30:37.060 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:30:37.060 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 3422041 00:30:37.060 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 3422041 00:30:37.318 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 3422041 Killed "${NVMF_APP[@]}" "$@" 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@328 -- # nvmfpid=3427632 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x1 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@329 -- # waitforlisten 3427632 00:30:37.318 10:48:17 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@835 -- # '[' -z 3427632 ']' 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:37.318 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:37.318 10:48:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:37.318 [2024-11-20 10:48:17.885141] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:37.318 [2024-11-20 10:48:17.886091] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:37.318 [2024-11-20 10:48:17.886129] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:37.318 [2024-11-20 10:48:17.963128] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:37.318 [2024-11-20 10:48:18.003174] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:37.318 [2024-11-20 10:48:18.003213] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
00:30:37.318 [2024-11-20 10:48:18.003220] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:37.318 [2024-11-20 10:48:18.003225] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:37.318 [2024-11-20 10:48:18.003230] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:30:37.318 [2024-11-20 10:48:18.003820] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:37.577 [2024-11-20 10:48:18.069612] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:37.577 [2024-11-20 10:48:18.069818] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@868 -- # return 0 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:37.577 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:37.836 [2024-11-20 10:48:18.305216] blobstore.c:4875:bs_recover: *NOTICE*: Performing recovery on blobstore 00:30:37.836 [2024-11-20 10:48:18.305410] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:30:37.836 [2024-11-20 10:48:18.305493] blobstore.c:4822:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:30:37.836 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev 1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:37.837 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d830457-6b42-4703-8eea-ca1fef48b8a6 -t 2000 00:30:38.095 [ 
00:30:38.096 { 00:30:38.096 "name": "1d830457-6b42-4703-8eea-ca1fef48b8a6", 00:30:38.096 "aliases": [ 00:30:38.096 "lvs/lvol" 00:30:38.096 ], 00:30:38.096 "product_name": "Logical Volume", 00:30:38.096 "block_size": 4096, 00:30:38.096 "num_blocks": 38912, 00:30:38.096 "uuid": "1d830457-6b42-4703-8eea-ca1fef48b8a6", 00:30:38.096 "assigned_rate_limits": { 00:30:38.096 "rw_ios_per_sec": 0, 00:30:38.096 "rw_mbytes_per_sec": 0, 00:30:38.096 "r_mbytes_per_sec": 0, 00:30:38.096 "w_mbytes_per_sec": 0 00:30:38.096 }, 00:30:38.096 "claimed": false, 00:30:38.096 "zoned": false, 00:30:38.096 "supported_io_types": { 00:30:38.096 "read": true, 00:30:38.096 "write": true, 00:30:38.096 "unmap": true, 00:30:38.096 "flush": false, 00:30:38.096 "reset": true, 00:30:38.096 "nvme_admin": false, 00:30:38.096 "nvme_io": false, 00:30:38.096 "nvme_io_md": false, 00:30:38.096 "write_zeroes": true, 00:30:38.096 "zcopy": false, 00:30:38.096 "get_zone_info": false, 00:30:38.096 "zone_management": false, 00:30:38.096 "zone_append": false, 00:30:38.096 "compare": false, 00:30:38.096 "compare_and_write": false, 00:30:38.096 "abort": false, 00:30:38.096 "seek_hole": true, 00:30:38.096 "seek_data": true, 00:30:38.096 "copy": false, 00:30:38.096 "nvme_iov_md": false 00:30:38.096 }, 00:30:38.096 "driver_specific": { 00:30:38.096 "lvol": { 00:30:38.096 "lvol_store_uuid": "3252a464-d8b2-476e-86af-36d5b631df9f", 00:30:38.096 "base_bdev": "aio_bdev", 00:30:38.096 "thin_provision": false, 00:30:38.096 "num_allocated_clusters": 38, 00:30:38.096 "snapshot": false, 00:30:38.096 "clone": false, 00:30:38.096 "esnap_clone": false 00:30:38.096 } 00:30:38.096 } 00:30:38.096 } 00:30:38.096 ] 00:30:38.096 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:38.096 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:38.096 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:30:38.355 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:30:38.355 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:38.355 10:48:18 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:38.616 [2024-11-20 10:48:19.268300] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@652 -- # local es=0 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@654 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 
3252a464-d8b2-476e-86af-36d5b631df9f 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@646 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:30:38.616 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:38.874 request: 00:30:38.874 { 00:30:38.875 "uuid": "3252a464-d8b2-476e-86af-36d5b631df9f", 00:30:38.875 "method": "bdev_lvol_get_lvstores", 00:30:38.875 "req_id": 1 00:30:38.875 } 00:30:38.875 Got JSON-RPC 
error response 00:30:38.875 response: 00:30:38.875 { 00:30:38.875 "code": -19, 00:30:38.875 "message": "No such device" 00:30:38.875 } 00:30:38.875 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@655 -- # es=1 00:30:38.875 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:30:38.875 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:30:38.875 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:30:38.875 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:30:39.133 aio_bdev 00:30:39.133 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:39.133 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@903 -- # local bdev_name=1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:39.133 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:30:39.133 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # local i 00:30:39.134 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:30:39.134 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:30:39.134 10:48:19 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@908 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:30:39.394 10:48:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@910 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b 1d830457-6b42-4703-8eea-ca1fef48b8a6 -t 2000 00:30:39.394 [ 00:30:39.394 { 00:30:39.394 "name": "1d830457-6b42-4703-8eea-ca1fef48b8a6", 00:30:39.394 "aliases": [ 00:30:39.394 "lvs/lvol" 00:30:39.394 ], 00:30:39.394 "product_name": "Logical Volume", 00:30:39.394 "block_size": 4096, 00:30:39.394 "num_blocks": 38912, 00:30:39.394 "uuid": "1d830457-6b42-4703-8eea-ca1fef48b8a6", 00:30:39.394 "assigned_rate_limits": { 00:30:39.394 "rw_ios_per_sec": 0, 00:30:39.394 "rw_mbytes_per_sec": 0, 00:30:39.394 "r_mbytes_per_sec": 0, 00:30:39.394 "w_mbytes_per_sec": 0 00:30:39.394 }, 00:30:39.394 "claimed": false, 00:30:39.394 "zoned": false, 00:30:39.394 "supported_io_types": { 00:30:39.394 "read": true, 00:30:39.394 "write": true, 00:30:39.394 "unmap": true, 00:30:39.394 "flush": false, 00:30:39.394 "reset": true, 00:30:39.394 "nvme_admin": false, 00:30:39.394 "nvme_io": false, 00:30:39.394 "nvme_io_md": false, 00:30:39.394 "write_zeroes": true, 00:30:39.394 "zcopy": false, 00:30:39.394 "get_zone_info": false, 00:30:39.394 "zone_management": false, 00:30:39.394 "zone_append": false, 00:30:39.394 "compare": false, 00:30:39.394 "compare_and_write": false, 00:30:39.394 "abort": false, 00:30:39.394 "seek_hole": true, 00:30:39.394 "seek_data": true, 00:30:39.394 "copy": false, 00:30:39.394 "nvme_iov_md": false 00:30:39.394 }, 00:30:39.394 "driver_specific": { 00:30:39.394 "lvol": { 00:30:39.394 "lvol_store_uuid": "3252a464-d8b2-476e-86af-36d5b631df9f", 00:30:39.394 "base_bdev": "aio_bdev", 00:30:39.394 "thin_provision": false, 00:30:39.394 "num_allocated_clusters": 38, 00:30:39.394 
"snapshot": false, 00:30:39.394 "clone": false, 00:30:39.394 "esnap_clone": false 00:30:39.394 } 00:30:39.394 } 00:30:39.394 } 00:30:39.394 ] 00:30:39.394 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@911 -- # return 0 00:30:39.394 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:39.394 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:30:39.652 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:30:39.652 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:39.652 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:30:39.911 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:30:39.911 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 1d830457-6b42-4703-8eea-ca1fef48b8a6 00:30:40.170 10:48:20 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 3252a464-d8b2-476e-86af-36d5b631df9f 00:30:40.428 10:48:20 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:30:40.428 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:30:40.428 00:30:40.428 real 0m17.038s 00:30:40.428 user 0m34.491s 00:30:40.428 sys 0m3.748s 00:30:40.428 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:40.428 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:30:40.428 ************************************ 00:30:40.428 END TEST lvs_grow_dirty 00:30:40.428 ************************************ 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # type=--id 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@813 -- # id=0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # '[' --id = --pid ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # shm_files=nvmf_trace.0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@820 -- # [[ -z nvmf_trace.0 ]] 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@824 -- # for n in $shm_files 
00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@825 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:30:40.687 nvmf_trace.0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@827 -- # return 0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@99 -- # sync 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@102 -- # set +e 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:40.687 rmmod nvme_tcp 00:30:40.687 rmmod nvme_fabrics 00:30:40.687 rmmod nvme_keyring 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@106 -- # set -e 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@107 -- # return 0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@336 -- # '[' -n 3427632 ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@337 -- # killprocess 3427632 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- 
common/autotest_common.sh@954 -- # '[' -z 3427632 ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # kill -0 3427632 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # uname 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3427632 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3427632' 00:30:40.687 killing process with pid 3427632 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@973 -- # kill 3427632 00:30:40.687 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@978 -- # wait 3427632 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@342 -- # nvmf_fini 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@264 -- # local dev 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:40.946 10:48:21 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:40.946 10:48:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@130 -- # return 0 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:42.851 10:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # _dev=0 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@41 -- # dev_map=() 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/setup.sh@284 -- # iptr 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-save 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- nvmf/common.sh@542 -- # iptables-restore 00:30:42.851 00:30:42.851 real 0m42.655s 00:30:42.851 user 0m53.072s 00:30:42.851 sys 0m10.230s 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:42.851 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:30:42.851 ************************************ 00:30:42.851 END TEST nvmf_lvs_grow 00:30:42.851 
************************************ 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@24 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:43.112 ************************************ 00:30:43.112 START TEST nvmf_bdev_io_wait 00:30:43.112 ************************************ 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp --interrupt-mode 00:30:43.112 * Looking for test storage... 
00:30:43.112 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lcov --version 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # IFS=.-: 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@336 -- # read -ra ver1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # IFS=.-: 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@337 -- # read -ra ver2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@338 -- # local 'op=<' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@340 -- # ver1_l=2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@341 -- # ver2_l=1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@343 -- # 
local lt=0 gt=0 eq=0 v 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@344 -- # case "$op" in 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@345 -- # : 1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # decimal 1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@365 -- # ver1[v]=1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # decimal 2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@353 -- # local d=2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@355 -- # echo 2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@366 -- # ver2[v]=2 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@368 -- # return 0 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:43.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.112 --rc genhtml_branch_coverage=1 00:30:43.112 --rc genhtml_function_coverage=1 00:30:43.112 --rc genhtml_legend=1 00:30:43.112 --rc geninfo_all_blocks=1 00:30:43.112 --rc geninfo_unexecuted_blocks=1 00:30:43.112 00:30:43.112 ' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:43.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.112 --rc genhtml_branch_coverage=1 00:30:43.112 --rc genhtml_function_coverage=1 00:30:43.112 --rc genhtml_legend=1 00:30:43.112 --rc geninfo_all_blocks=1 00:30:43.112 --rc geninfo_unexecuted_blocks=1 00:30:43.112 00:30:43.112 ' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:43.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.112 --rc genhtml_branch_coverage=1 00:30:43.112 --rc genhtml_function_coverage=1 00:30:43.112 --rc genhtml_legend=1 00:30:43.112 --rc geninfo_all_blocks=1 00:30:43.112 --rc geninfo_unexecuted_blocks=1 00:30:43.112 00:30:43.112 ' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:43.112 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:43.112 --rc genhtml_branch_coverage=1 00:30:43.112 --rc genhtml_function_coverage=1 
00:30:43.112 --rc genhtml_legend=1 00:30:43.112 --rc geninfo_all_blocks=1 00:30:43.112 --rc geninfo_unexecuted_blocks=1 00:30:43.112 00:30:43.112 ' 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:43.112 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@15 -- # shopt -s extglob 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:43.371 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@50 -- # : 0 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:43.372 10:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # remove_target_ns 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:43.372 10:48:23 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # xtrace_disable 00:30:43.372 10:48:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # pci_devs=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@131 -- # local -a pci_devs 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # pci_net_devs=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # pci_drivers=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@133 -- # local -A pci_drivers 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # net_devs=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@135 -- # local -ga net_devs 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 -- # e810=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@136 
-- # local -ga e810 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # x722=() 00:30:50.003 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@137 -- # local -ga x722 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # mlx=() 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@138 -- # local -ga mlx 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@157 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:30:50.004 Found 0000:86:00.0 (0x8086 - 0x159b) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.004 
10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:30:50.004 Found 0000:86:00.1 (0x8086 - 0x159b) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:30:50.004 Found net devices under 0000:86:00.0: cvl_0_0 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # [[ up == up ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:30:50.004 Found net devices under 0000:86:00.1: cvl_0_1 
00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@262 -- # is_hw=yes 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@257 -- # create_target_ns 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 
00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@27 -- # local -gA dev_map 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@28 -- # local -g _dev 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # ips=() 00:30:50.004 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:30:50.004 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:30:50.005 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772161 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:30:50.005 10.0.0.1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@11 -- # local val=167772162 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:30:50.005 10.0.0.2 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:30:50.005 
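The `set_ip` calls above derive dotted-quad addresses from 32-bit integers (167772161 is 0x0a000001, i.e. 10.0.0.1; the pool then steps by one for the target side). A minimal standalone re-creation of the log's `val_to_ip` helper, using plain bash arithmetic (the function body here is an assumption reconstructed from the `printf '%u.%u.%u.%u\n' 10 0 0 1` trace, not copied from nvmf/setup.sh):

```shell
# Sketch of val_to_ip as seen in the trace: split a 32-bit value into octets.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 10.0.0.1 (initiator side in this run)
val_to_ip 167772162   # 10.0.0.2 (target side in this run)
```

Because the pool is a single integer, `setup_interfaces` can hand out initiator/target pairs just by incrementing it, which is what the `ips=("$ip" $((++ip)))` line in the trace does.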
10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:30:50.005 10:48:29 
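At this point the trace has completed one full initiator/target pair setup in `phy` mode. Condensed into a dry-run sketch (the real script executes these as root; here `run` only echoes each command so the sequence is visible and testable — that indirection is an addition of this sketch, not part of nvmf/setup.sh; the namespace and `cvl_0_0`/`cvl_0_1` device names are taken from the log):

```shell
# Dry-run of the pair setup performed above. Swap run() for real execution
# (as root) to apply it: run() { "$@"; }
run() { echo "$@"; }

NS=nvmf_ns_spdk
run ip netns add "$NS"                               # create_target_ns
run ip netns exec "$NS" ip link set lo up            # loopback inside the ns
run ip link set cvl_0_1 netns "$NS"                  # target NIC moves into the ns
run ip addr add 10.0.0.1/24 dev cvl_0_0              # initiator stays in host ns
run ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_1
run ip link set cvl_0_0 up
run ip netns exec "$NS" ip link set cvl_0_1 up
run iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT  # NVMe/TCP port
```

The iptables rule is inserted first in INPUT so the NVMe/TCP listener on port 4420 is reachable regardless of later rules; the trace additionally tags it with an `SPDK_NVMF:` comment for cleanup.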
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@38 -- # ping_ips 1 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:50.005 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:50.005 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 
10.0.0.1 00:30:50.006 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:30:50.006 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:30:50.006 00:30:50.006 --- 10.0.0.1 ping statistics --- 00:30:50.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.006 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:50.006 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:30:50.006 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:30:50.006 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.215 ms 00:30:50.006 00:30:50.006 --- 10.0.0.2 ping statistics --- 00:30:50.006 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:30:50.006 rtt min/avg/max/mdev = 0.215/0.215/0.215/0.000 ms 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair++ )) 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # return 0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:50.006 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:30:50.006 10:48:29 
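The repeated `cat /sys/class/net/<dev>/ifalias` reads above work because `set_ip` earlier stored each address in the interface's ifalias, and `get_ip_address` reads it back rather than parsing `ip addr` output. A round-trip sketch, with a temp file standing in for the sysfs node (an assumption for this sketch only, so it runs without root or a real NIC):

```shell
# ifalias round-trip as used by set_ip / get_ip_address in the trace,
# demonstrated against a temp file instead of /sys/class/net/<dev>/ifalias.
alias_file=$(mktemp)

echo 10.0.0.1 | tee "$alias_file" >/dev/null   # what set_ip does (tee, so sudo works)
ip=$(cat "$alias_file")                         # what get_ip_address does

echo "$ip"   # 10.0.0.1
rm -f "$alias_file"
```

Writing through `tee` instead of a plain redirection matters in the real script because the redirection would be performed by the unprivileged shell, while `tee` runs under the elevated command.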
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=initiator1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.006 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target0 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:30:50.006 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:30:50.007 
10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # get_net_dev target1 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@107 -- # local dev=target1 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@109 -- # return 1 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@168 -- # dev= 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@169 -- # return 0 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:30:50.007 10:48:29 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@726 -- # xtrace_disable 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@328 -- # nvmfpid=3431762 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF --wait-for-rpc 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # waitforlisten 3431762 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@835 -- # '[' -z 3431762 ']' 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@842 -- # echo 'Waiting for 
process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:50.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:50.007 10:48:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 [2024-11-20 10:48:29.951830] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:30:50.007 [2024-11-20 10:48:29.952723] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:50.007 [2024-11-20 10:48:29.952756] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:30:50.007 [2024-11-20 10:48:30.032011] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:30:50.007 [2024-11-20 10:48:30.082843] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:30:50.007 [2024-11-20 10:48:30.082881] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:30:50.007 [2024-11-20 10:48:30.082889] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:30:50.007 [2024-11-20 10:48:30.082894] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:30:50.007 [2024-11-20 10:48:30.082899] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:30:50.007 [2024-11-20 10:48:30.084334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:30:50.007 [2024-11-20 10:48:30.084443] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:30:50.007 [2024-11-20 10:48:30.084546] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:30:50.007 [2024-11-20 10:48:30.084547] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:30:50.007 [2024-11-20 10:48:30.084893] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@868 -- # return 0 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@732 -- # xtrace_disable 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.007 10:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 [2024-11-20 10:48:30.221751] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:30:50.007 [2024-11-20 10:48:30.222499] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:30:50.007 [2024-11-20 10:48:30.222643] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:30:50.007 [2024-11-20 10:48:30.222770] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.007 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.007 [2024-11-20 10:48:30.233278] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 Malloc0 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.008 10:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:50.008 [2024-11-20 10:48:30.309568] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=3431785 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=3431787 00:30:50.008 10:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=3431789 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:50.008 10:48:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=3431792 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 
0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.008 "trsvcid": "$NVMF_PORT", 00:30:50.008 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.008 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.008 "hdgst": ${hdgst:-false}, 00:30:50.008 "ddgst": ${ddgst:-false} 00:30:50.008 }, 00:30:50.008 "method": "bdev_nvme_attach_controller" 00:30:50.008 } 00:30:50.008 EOF 00:30:50.008 )") 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # config=() 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # local subsystem config 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:30:50.008 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:30:50.008 { 00:30:50.008 "params": { 00:30:50.008 "name": "Nvme$subsystem", 00:30:50.008 "trtype": "$TEST_TRANSPORT", 00:30:50.008 "traddr": "$NVMF_FIRST_TARGET_IP", 00:30:50.008 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "$NVMF_PORT", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:30:50.009 "hdgst": ${hdgst:-false}, 00:30:50.009 "ddgst": ${ddgst:-false} 00:30:50.009 }, 00:30:50.009 "method": 
"bdev_nvme_attach_controller" 00:30:50.009 } 00:30:50.009 EOF 00:30:50.009 )") 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 3431785 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # cat 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme1", 00:30:50.009 "trtype": "tcp", 00:30:50.009 "traddr": "10.0.0.2", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "4420", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.009 "hdgst": false, 00:30:50.009 "ddgst": false 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 }' 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@396 -- # jq . 
00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme1", 00:30:50.009 "trtype": "tcp", 00:30:50.009 "traddr": "10.0.0.2", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "4420", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.009 "hdgst": false, 00:30:50.009 "ddgst": false 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 }' 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme1", 00:30:50.009 "trtype": "tcp", 00:30:50.009 "traddr": "10.0.0.2", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "4420", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.009 "hdgst": false, 00:30:50.009 "ddgst": false 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 00:30:50.009 }' 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@397 -- # IFS=, 00:30:50.009 10:48:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:30:50.009 "params": { 00:30:50.009 "name": "Nvme1", 00:30:50.009 "trtype": "tcp", 00:30:50.009 "traddr": "10.0.0.2", 00:30:50.009 "adrfam": "ipv4", 00:30:50.009 "trsvcid": "4420", 00:30:50.009 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:30:50.009 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:30:50.009 "hdgst": false, 00:30:50.009 "ddgst": false 00:30:50.009 }, 00:30:50.009 "method": "bdev_nvme_attach_controller" 
00:30:50.009 }' 00:30:50.009 [2024-11-20 10:48:30.364119] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:50.009 [2024-11-20 10:48:30.364124] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:50.009 [2024-11-20 10:48:30.364124] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:30:50.009 [2024-11-20 10:48:30.364178] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib[2024-11-20 10:48:30.364179] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 [2024-11-20 10:48:30.364180] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:30:50.009 .cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:30:50.009 --proc-type=auto ] 00:30:50.009 [2024-11-20 10:48:30.368114] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:30:50.009 [2024-11-20 10:48:30.368181] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:30:50.009 [2024-11-20 10:48:30.561402] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.009 [2024-11-20 10:48:30.603856] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:30:50.009 [2024-11-20 10:48:30.657222] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.009 [2024-11-20 10:48:30.698279] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 7 00:30:50.268 [2024-11-20 10:48:30.754307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.268 [2024-11-20 10:48:30.807215] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:30:50.268 [2024-11-20 10:48:30.812882] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:50.268 [2024-11-20 10:48:30.855507] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:30:50.268 Running I/O for 1 seconds... 00:30:50.268 Running I/O for 1 seconds... 00:30:50.268 Running I/O for 1 seconds... 00:30:50.526 Running I/O for 1 seconds... 
00:30:51.464 11682.00 IOPS, 45.63 MiB/s [2024-11-20T09:48:32.195Z] 9810.00 IOPS, 38.32 MiB/s 00:30:51.464 Latency(us) 00:30:51.464 [2024-11-20T09:48:32.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.464 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:30:51.464 Nvme1n1 : 1.01 11728.72 45.82 0.00 0.00 10874.89 3354.82 12420.63 00:30:51.464 [2024-11-20T09:48:32.195Z] =================================================================================================================== 00:30:51.464 [2024-11-20T09:48:32.195Z] Total : 11728.72 45.82 0.00 0.00 10874.89 3354.82 12420.63 00:30:51.464 00:30:51.464 Latency(us) 00:30:51.464 [2024-11-20T09:48:32.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.464 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:30:51.464 Nvme1n1 : 1.01 9881.58 38.60 0.00 0.00 12911.89 1833.45 15853.47 00:30:51.464 [2024-11-20T09:48:32.195Z] =================================================================================================================== 00:30:51.464 [2024-11-20T09:48:32.195Z] Total : 9881.58 38.60 0.00 0.00 12911.89 1833.45 15853.47 00:30:51.464 11215.00 IOPS, 43.81 MiB/s 00:30:51.464 Latency(us) 00:30:51.464 [2024-11-20T09:48:32.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.464 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:30:51.464 Nvme1n1 : 1.00 11310.87 44.18 0.00 0.00 11293.24 2340.57 16602.45 00:30:51.464 [2024-11-20T09:48:32.195Z] =================================================================================================================== 00:30:51.464 [2024-11-20T09:48:32.195Z] Total : 11310.87 44.18 0.00 0.00 11293.24 2340.57 16602.45 00:30:51.464 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 3431787 00:30:51.464 10:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 3431789 00:30:51.464 253960.00 IOPS, 992.03 MiB/s 00:30:51.464 Latency(us) 00:30:51.464 [2024-11-20T09:48:32.195Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:51.464 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:30:51.464 Nvme1n1 : 1.00 253577.74 990.54 0.00 0.00 502.66 223.33 1490.16 00:30:51.464 [2024-11-20T09:48:32.195Z] =================================================================================================================== 00:30:51.464 [2024-11-20T09:48:32.195Z] Total : 253577.74 990.54 0.00 0.00 502.66 223.33 1490.16 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 3431792 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@563 -- # xtrace_disable 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # nvmfcleanup 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@99 -- # sync 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@101 -- 
# '[' tcp == tcp ']' 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@102 -- # set +e 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@103 -- # for i in {1..20} 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:30:51.724 rmmod nvme_tcp 00:30:51.724 rmmod nvme_fabrics 00:30:51.724 rmmod nvme_keyring 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@106 -- # set -e 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@107 -- # return 0 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@336 -- # '[' -n 3431762 ']' 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@337 -- # killprocess 3431762 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # '[' -z 3431762 ']' 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # kill -0 3431762 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # uname 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3431762 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:51.724 10:48:32 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3431762' 00:30:51.724 killing process with pid 3431762 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@973 -- # kill 3431762 00:30:51.724 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@978 -- # wait 3431762 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # nvmf_fini 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@264 -- # local dev 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@267 -- # remove_target_ns 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:51.984 10:48:32 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@268 -- # delete_main_bridge 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@130 -- # return 0 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- 
nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@224 -- # ip addr 
flush dev cvl_0_1 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # _dev=0 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@41 -- # dev_map=() 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/setup.sh@284 -- # iptr 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-save 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- nvmf/common.sh@542 -- # iptables-restore 00:30:53.891 00:30:53.891 real 0m10.927s 00:30:53.891 user 0m14.984s 00:30:53.891 sys 0m6.655s 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:30:53.891 ************************************ 00:30:53.891 END TEST nvmf_bdev_io_wait 00:30:53.891 ************************************ 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@25 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:53.891 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:30:54.150 ************************************ 
00:30:54.150 START TEST nvmf_queue_depth 00:30:54.151 ************************************ 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp --interrupt-mode 00:30:54.151 * Looking for test storage... 00:30:54.151 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lcov --version 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # IFS=.-: 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@336 -- # read -ra ver1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # IFS=.-: 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@337 -- # read -ra ver2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@338 -- # local 
'op=<' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@340 -- # ver1_l=2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@341 -- # ver2_l=1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@344 -- # case "$op" in 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@345 -- # : 1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # decimal 1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@365 -- # ver1[v]=1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # decimal 2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@353 -- # local d=2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@355 -- # echo 2 
00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@366 -- # ver2[v]=2 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@368 -- # return 0 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:30:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.151 --rc genhtml_branch_coverage=1 00:30:54.151 --rc genhtml_function_coverage=1 00:30:54.151 --rc genhtml_legend=1 00:30:54.151 --rc geninfo_all_blocks=1 00:30:54.151 --rc geninfo_unexecuted_blocks=1 00:30:54.151 00:30:54.151 ' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:30:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.151 --rc genhtml_branch_coverage=1 00:30:54.151 --rc genhtml_function_coverage=1 00:30:54.151 --rc genhtml_legend=1 00:30:54.151 --rc geninfo_all_blocks=1 00:30:54.151 --rc geninfo_unexecuted_blocks=1 00:30:54.151 00:30:54.151 ' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:30:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.151 --rc genhtml_branch_coverage=1 00:30:54.151 --rc genhtml_function_coverage=1 00:30:54.151 --rc genhtml_legend=1 00:30:54.151 --rc geninfo_all_blocks=1 00:30:54.151 --rc 
geninfo_unexecuted_blocks=1 00:30:54.151 00:30:54.151 ' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:30:54.151 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:54.151 --rc genhtml_branch_coverage=1 00:30:54.151 --rc genhtml_function_coverage=1 00:30:54.151 --rc genhtml_legend=1 00:30:54.151 --rc geninfo_all_blocks=1 00:30:54.151 --rc geninfo_unexecuted_blocks=1 00:30:54.151 00:30:54.151 ' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@15 -- # 
NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@15 -- # shopt -s extglob 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.151 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.152 10:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@50 -- # : 0 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@27 -- # NVMF_APP+=(-i 
"$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@54 -- # have_pci_nics=0 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@296 -- # prepare_net_devs 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@258 -- # local -g is_hw=no 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@260 -- # remove_target_ns 00:30:54.152 10:48:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@125 -- # xtrace_disable 00:30:54.152 10:48:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.750 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:00.750 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # pci_devs=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # net_devs=() 00:31:00.751 10:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # e810=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@136 -- # local -ga e810 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # x722=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@137 -- # local -ga x722 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # mlx=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@138 -- # local -ga mlx 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@154 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:00.751 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 
00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:00.751 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:00.751 Found net devices under 0000:86:00.0: cvl_0_0 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@243 -- # 
pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:00.751 Found net devices under 0000:86:00.1: cvl_0_1 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@262 -- # is_hw=yes 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@257 -- # create_target_ns 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:00.751 10:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@28 -- # local -g _dev 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:00.751 10:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # ips=() 00:31:00.751 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772161 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:00.752 10.0.0.1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 
00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@11 -- # local val=167772162 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:00.752 10.0.0.2 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
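The `val_to_ip` calls traced above turn the pool integers 167772161 and 167772162 into the dotted-quad addresses 10.0.0.1 and 10.0.0.2 that get assigned to the pair. A minimal re-implementation of that helper (the name mirrors the trace; the shift/mask body is a sketch, not SPDK's exact source):

```shell
# Sketch of the val_to_ip helper seen in nvmf/setup.sh: split a 32-bit
# integer into four octets with shifts and masks, then print dotted-quad.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # -> 10.0.0.1 (0x0A000001)
val_to_ip 167772162   # -> 10.0.0.2
```

The ip_pool arithmetic in the trace (`ip_pool += 2` per pair) then hands each initiator/target pair two consecutive integers from this space.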
nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 
-j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 
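Condensed, the per-pair setup traced above is a short privileged command sequence: create the namespace, move the target-side device into it, assign one address on each side, bring both ends up, and open TCP port 4420 on the initiator interface. The sketch below collects those commands in order; the function name and the `DRY_RUN` switch are inventions for illustration (the real harness runs each command directly, with root), and `DRY_RUN=1` only prints the commands since they cannot run unprivileged:

```shell
# Condensed sketch of the phy interface-pair setup from nvmf/setup.sh.
# With DRY_RUN=1 the commands are echoed instead of executed.
setup_pair() {
  local initiator=$1 target=$2 ns=$3 iip=$4 tip=$5
  run() { if [ "${DRY_RUN:-0}" = 1 ]; then echo "$*"; else "$@"; fi; }
  run ip netns add "$ns"                                   # create_target_ns
  run ip netns exec "$ns" ip link set lo up                # set_up lo in ns
  run ip link set "$target" netns "$ns"                    # add_to_ns
  run ip addr add "$iip/24" dev "$initiator"               # set_ip (host side)
  run ip netns exec "$ns" ip addr add "$tip/24" dev "$target"
  run ip link set "$initiator" up                          # set_up
  run ip netns exec "$ns" ip link set "$target" up
  run iptables -I INPUT 1 -i "$initiator" -p tcp --dport 4420 -j ACCEPT
}

DRY_RUN=1 setup_pair cvl_0_0 cvl_0_1 nvmf_ns_spdk 10.0.0.1 10.0.0.2
```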
-- # local dev=initiator0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ip 
netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:00.752 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:00.752 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.356 ms 00:31:00.752 00:31:00.752 --- 10.0.0.1 ping statistics --- 00:31:00.752 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.752 rtt min/avg/max/mdev = 0.356/0.356/0.356/0.000 ms 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.752 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:00.753 10:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:00.753 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:00.753 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.204 ms 00:31:00.753 00:31:00.753 --- 10.0.0.2 ping statistics --- 00:31:00.753 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:00.753 rtt min/avg/max/mdev = 0.204/0.204/0.204/0.000 ms 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@270 -- # return 0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:00.753 10:48:40 
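Note how the address lookups feeding the pings above go through the interface's `ifalias` attribute: the setup step `tee`d each address into `/sys/class/net/<dev>/ifalias`, so later code reads it back with a single `cat` instead of parsing `ip addr` output. A sketch of that lookup (the sysfs root is parameterized here purely so the function can be exercised against a scratch directory rather than a real `/sys`):

```shell
# Sketch of the ifalias-based IP lookup from nvmf/setup.sh's
# get_ip_address: the setup phase stored the address in ifalias,
# so discovery is just a sysfs read.
get_ip_address() {
  local dev=$1 sysfs=${2:-/sys}
  cat "$sysfs/class/net/$dev/ifalias"
}
```

In the log this is wrapped in `ip netns exec` when the device lives inside the target namespace, which is why the target-side read appears as `ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias`.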
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 
00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@179 -- # get_ip_address target1 
NVMF_TARGET_NS_CMD 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@107 -- # local dev=target1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@109 -- # return 1 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@168 -- # dev= 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@169 -- # return 0 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:00.753 10:48:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@328 -- # nvmfpid=3435628 00:31:00.753 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@329 -- # waitforlisten 3435628 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3435628 ']' 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:00.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.754 10:48:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 [2024-11-20 10:48:40.960430] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:00.754 [2024-11-20 10:48:40.961313] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:31:00.754 [2024-11-20 10:48:40.961348] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:00.754 [2024-11-20 10:48:41.040481] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.754 [2024-11-20 10:48:41.081639] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:00.754 [2024-11-20 10:48:41.081677] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:00.754 [2024-11-20 10:48:41.081684] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:00.754 [2024-11-20 10:48:41.081690] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:00.754 [2024-11-20 10:48:41.081695] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:00.754 [2024-11-20 10:48:41.082240] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:00.754 [2024-11-20 10:48:41.149260] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:00.754 [2024-11-20 10:48:41.149486] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 [2024-11-20 10:48:41.222856] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 Malloc0 00:31:00.754 10:48:41 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 [2024-11-20 10:48:41.291062] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:00.754 
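The `rpc_cmd` calls above configure the target over `/var/tmp/spdk.sock`. Issued directly with SPDK's `scripts/rpc.py`, the same sequence would look like this (a sketch of exactly the commands visible in the log: create the TCP transport, back it with a 64 MiB malloc bdev, and expose it as namespace 1 of `cnode1` listening on the target-side address):

```shell
# Target-side configuration, as traced in the log (run against the
# default RPC socket /var/tmp/spdk.sock of the nvmf_tgt started above).
scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
```

The `*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***` notice in the log is the target acknowledging the final listener command.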
10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=3435847 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 3435847 /var/tmp/bdevperf.sock 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@835 -- # '[' -z 3435847 ']' 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:31:00.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:00.754 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:00.754 [2024-11-20 10:48:41.340937] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:31:00.754 [2024-11-20 10:48:41.340979] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3435847 ] 00:31:00.754 [2024-11-20 10:48:41.396401] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.754 [2024-11-20 10:48:41.436761] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@868 -- # return 0 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:01.022 NVMe0n1 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:01.022 10:48:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:31:01.022 Running I/O for 10 seconds... 
00:31:03.331 11942.00 IOPS, 46.65 MiB/s [2024-11-20T09:48:44.996Z] 12099.50 IOPS, 47.26 MiB/s [2024-11-20T09:48:45.928Z] 12292.00 IOPS, 48.02 MiB/s [2024-11-20T09:48:46.861Z] 12334.25 IOPS, 48.18 MiB/s [2024-11-20T09:48:47.793Z] 12424.80 IOPS, 48.53 MiB/s [2024-11-20T09:48:49.167Z] 12462.50 IOPS, 48.68 MiB/s [2024-11-20T09:48:50.100Z] 12530.71 IOPS, 48.95 MiB/s [2024-11-20T09:48:51.034Z] 12538.25 IOPS, 48.98 MiB/s [2024-11-20T09:48:51.968Z] 12535.56 IOPS, 48.97 MiB/s [2024-11-20T09:48:51.968Z] 12585.00 IOPS, 49.16 MiB/s 00:31:11.237 Latency(us) 00:31:11.237 [2024-11-20T09:48:51.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.237 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:31:11.237 Verification LBA range: start 0x0 length 0x4000 00:31:11.237 NVMe0n1 : 10.06 12598.42 49.21 0.00 0.00 81016.47 19099.06 53926.77 00:31:11.237 [2024-11-20T09:48:51.968Z] =================================================================================================================== 00:31:11.237 [2024-11-20T09:48:51.968Z] Total : 12598.42 49.21 0.00 0.00 81016.47 19099.06 53926.77 00:31:11.237 { 00:31:11.237 "results": [ 00:31:11.237 { 00:31:11.237 "job": "NVMe0n1", 00:31:11.237 "core_mask": "0x1", 00:31:11.237 "workload": "verify", 00:31:11.237 "status": "finished", 00:31:11.237 "verify_range": { 00:31:11.237 "start": 0, 00:31:11.237 "length": 16384 00:31:11.237 }, 00:31:11.237 "queue_depth": 1024, 00:31:11.237 "io_size": 4096, 00:31:11.237 "runtime": 10.063169, 00:31:11.237 "iops": 12598.417059278245, 00:31:11.237 "mibps": 49.212566637805644, 00:31:11.237 "io_failed": 0, 00:31:11.237 "io_timeout": 0, 00:31:11.237 "avg_latency_us": 81016.46681264133, 00:31:11.237 "min_latency_us": 19099.062857142857, 00:31:11.237 "max_latency_us": 53926.76571428571 00:31:11.237 } 00:31:11.237 ], 00:31:11.237 "core_count": 1 00:31:11.237 } 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- 
target/queue_depth.sh@39 -- # killprocess 3435847 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3435847 ']' 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3435847 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435847 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435847' 00:31:11.237 killing process with pid 3435847 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3435847 00:31:11.237 Received shutdown signal, test time was about 10.000000 seconds 00:31:11.237 00:31:11.237 Latency(us) 00:31:11.237 [2024-11-20T09:48:51.968Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:31:11.237 [2024-11-20T09:48:51.968Z] =================================================================================================================== 00:31:11.237 [2024-11-20T09:48:51.968Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:31:11.237 10:48:51 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3435847 00:31:11.496 10:48:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:31:11.496 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:31:11.496 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@335 -- # nvmfcleanup 00:31:11.496 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@99 -- # sync 00:31:11.496 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:11.496 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@102 -- # set +e 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:11.497 rmmod nvme_tcp 00:31:11.497 rmmod nvme_fabrics 00:31:11.497 rmmod nvme_keyring 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@106 -- # set -e 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@107 -- # return 0 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@336 -- # '[' -n 3435628 ']' 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@337 -- # killprocess 3435628 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@954 -- # '[' -z 3435628 ']' 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@958 -- # kill -0 3435628 00:31:11.497 10:48:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # uname 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3435628 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3435628' 00:31:11.497 killing process with pid 3435628 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@973 -- # kill 3435628 00:31:11.497 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@978 -- # wait 3435628 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@342 -- # nvmf_fini 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@264 -- # local dev 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:11.756 10:48:52 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:11.756 10:48:52 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@130 -- # return 0 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:14.294 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:14.295 10:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # _dev=0 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@41 -- # dev_map=() 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/setup.sh@284 -- # iptr 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-save 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- nvmf/common.sh@542 -- # iptables-restore 00:31:14.295 00:31:14.295 real 0m19.760s 00:31:14.295 user 0m22.660s 00:31:14.295 sys 0m6.361s 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:31:14.295 ************************************ 00:31:14.295 END TEST nvmf_queue_depth 00:31:14.295 ************************************ 00:31:14.295 10:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:14.295 ************************************ 00:31:14.295 START TEST nvmf_nmic 00:31:14.295 ************************************ 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp --interrupt-mode 00:31:14.295 * Looking for test storage... 00:31:14.295 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lcov --version 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
scripts/common.sh@336 -- # IFS=.-: 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@336 -- # read -ra ver1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # IFS=.-: 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@337 -- # read -ra ver2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@338 -- # local 'op=<' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@340 -- # ver1_l=2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@341 -- # ver2_l=1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@344 -- # case "$op" in 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@345 -- # : 1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # decimal 1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@365 -- # ver1[v]=1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # decimal 2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@353 -- # local d=2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@355 -- # echo 2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@366 -- # ver2[v]=2 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@368 -- # return 0 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.295 --rc genhtml_branch_coverage=1 00:31:14.295 --rc 
genhtml_function_coverage=1 00:31:14.295 --rc genhtml_legend=1 00:31:14.295 --rc geninfo_all_blocks=1 00:31:14.295 --rc geninfo_unexecuted_blocks=1 00:31:14.295 00:31:14.295 ' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.295 --rc genhtml_branch_coverage=1 00:31:14.295 --rc genhtml_function_coverage=1 00:31:14.295 --rc genhtml_legend=1 00:31:14.295 --rc geninfo_all_blocks=1 00:31:14.295 --rc geninfo_unexecuted_blocks=1 00:31:14.295 00:31:14.295 ' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.295 --rc genhtml_branch_coverage=1 00:31:14.295 --rc genhtml_function_coverage=1 00:31:14.295 --rc genhtml_legend=1 00:31:14.295 --rc geninfo_all_blocks=1 00:31:14.295 --rc geninfo_unexecuted_blocks=1 00:31:14.295 00:31:14.295 ' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:14.295 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:14.295 --rc genhtml_branch_coverage=1 00:31:14.295 --rc genhtml_function_coverage=1 00:31:14.295 --rc genhtml_legend=1 00:31:14.295 --rc geninfo_all_blocks=1 00:31:14.295 --rc geninfo_unexecuted_blocks=1 00:31:14.295 00:31:14.295 ' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 
00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@15 -- # shopt -s extglob 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@544 -- 
# [[ -e /bin/wpdk_common.sh ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:14.295 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:14.296 10:48:54 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@50 -- # : 0 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@296 -- # 
prepare_net_devs 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@260 -- # remove_target_ns 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@125 -- # xtrace_disable 00:31:14.296 10:48:54 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # pci_devs=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@133 -- # local -A 
pci_drivers 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # net_devs=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # e810=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@136 -- # local -ga e810 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # x722=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@137 -- # local -ga x722 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # mlx=() 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@138 -- # local -ga mlx 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:20.870 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:20.870 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:20.870 
10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:20.870 Found net devices under 0000:86:00.0: cvl_0_0 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:20.870 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:20.871 Found net devices under 0000:86:00.1: cvl_0_1 00:31:20.871 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@262 -- # is_hw=yes 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@257 -- # create_target_ns 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.871 
10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@28 -- # local -g _dev 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # ips=() 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 
00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772161 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:20.871 10.0.0.1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@11 -- # local val=167772162 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@13 -- # 
printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:20.871 10.0.0.2 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.871 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair 
= 0 )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:20.871 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:20.872 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:20.872 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:31:20.872 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.465 ms 00:31:20.872 00:31:20.872 --- 10.0.0.1 ping statistics --- 00:31:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.872 rtt min/avg/max/mdev = 0.465/0.465/0.465/0.000 ms 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.872 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:20.872 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:20.872 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:31:20.872 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:31:20.872 00:31:20.872 --- 10.0.0.2 ping statistics --- 00:31:20.872 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:20.872 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@270 -- # return 0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n '' ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:31:20.872 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:20.872 10:49:00 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target0 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:20.872 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 
-- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@107 -- # local dev=target1 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@109 -- # return 1 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@168 -- # dev= 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@169 -- # return 0 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@328 -- # nvmfpid=3441002 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@329 -- # waitforlisten 3441002 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@835 -- # '[' -z 3441002 ']' 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:20.873 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:20.873 10:49:00 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 [2024-11-20 10:49:00.833092] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:20.873 [2024-11-20 10:49:00.833999] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:31:20.873 [2024-11-20 10:49:00.834032] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:20.873 [2024-11-20 10:49:00.912518] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:20.873 [2024-11-20 10:49:00.955837] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:20.873 [2024-11-20 10:49:00.955874] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:20.873 [2024-11-20 10:49:00.955881] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:20.873 [2024-11-20 10:49:00.955887] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:20.873 [2024-11-20 10:49:00.955892] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:31:20.873 [2024-11-20 10:49:00.957465] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:20.873 [2024-11-20 10:49:00.957502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:20.873 [2024-11-20 10:49:00.957606] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:20.873 [2024-11-20 10:49:00.957607] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:20.873 [2024-11-20 10:49:01.024505] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:20.873 [2024-11-20 10:49:01.025352] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:20.873 [2024-11-20 10:49:01.025481] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:31:20.873 [2024-11-20 10:49:01.025793] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:20.873 [2024-11-20 10:49:01.025856] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@868 -- # return 0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 [2024-11-20 10:49:01.090496] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 Malloc0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 [2024-11-20 10:49:01.174652] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:20.873 10:49:01 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:31:20.873 test case1: single bdev can't be used in multiple subsystems 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.873 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.873 [2024-11-20 10:49:01.210107] 
bdev.c:8199:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:31:20.873 [2024-11-20 10:49:01.210126] subsystem.c:2150:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:31:20.873 [2024-11-20 10:49:01.210138] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:31:20.873 request: 00:31:20.873 { 00:31:20.873 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:31:20.873 "namespace": { 00:31:20.873 "bdev_name": "Malloc0", 00:31:20.873 "no_auto_visible": false 00:31:20.874 }, 00:31:20.874 "method": "nvmf_subsystem_add_ns", 00:31:20.874 "req_id": 1 00:31:20.874 } 00:31:20.874 Got JSON-RPC error response 00:31:20.874 response: 00:31:20.874 { 00:31:20.874 "code": -32602, 00:31:20.874 "message": "Invalid parameters" 00:31:20.874 } 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:31:20.874 Adding namespace failed - expected result. 
00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:31:20.874 test case2: host connect to nvmf target in multiple paths 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@563 -- # xtrace_disable 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:20.874 [2024-11-20 10:49:01.222199] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:20.874 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:31:21.132 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:31:21.132 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1202 -- # local i=0 00:31:21.132 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:21.132 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic 
-- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:31:21.132 10:49:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1209 -- # sleep 2 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1212 -- # return 0 00:31:23.031 10:49:03 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:23.031 [global] 00:31:23.031 thread=1 00:31:23.031 invalidate=1 00:31:23.031 rw=write 00:31:23.031 time_based=1 00:31:23.031 runtime=1 00:31:23.031 ioengine=libaio 00:31:23.031 direct=1 00:31:23.031 bs=4096 00:31:23.031 iodepth=1 00:31:23.031 norandommap=0 00:31:23.031 numjobs=1 00:31:23.031 00:31:23.031 verify_dump=1 00:31:23.031 verify_backlog=512 00:31:23.031 verify_state_save=0 00:31:23.031 do_verify=1 00:31:23.031 verify=crc32c-intel 00:31:23.031 [job0] 00:31:23.031 filename=/dev/nvme0n1 00:31:23.289 Could not set queue depth (nvme0n1) 00:31:23.289 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:23.289 fio-3.35 00:31:23.289 Starting 1 thread 00:31:24.663 00:31:24.663 job0: (groupid=0, jobs=1): err= 0: pid=3441612: Wed Nov 20 
10:49:05 2024 00:31:24.663 read: IOPS=2477, BW=9910KiB/s (10.1MB/s)(9920KiB/1001msec) 00:31:24.663 slat (nsec): min=6287, max=26375, avg=6991.44, stdev=867.02 00:31:24.663 clat (usec): min=192, max=454, avg=229.98, stdev=22.82 00:31:24.663 lat (usec): min=198, max=461, avg=236.97, stdev=22.80 00:31:24.663 clat percentiles (usec): 00:31:24.663 | 1.00th=[ 196], 5.00th=[ 198], 10.00th=[ 200], 20.00th=[ 204], 00:31:24.663 | 30.00th=[ 206], 40.00th=[ 217], 50.00th=[ 245], 60.00th=[ 247], 00:31:24.663 | 70.00th=[ 247], 80.00th=[ 249], 90.00th=[ 253], 95.00th=[ 255], 00:31:24.663 | 99.00th=[ 260], 99.50th=[ 262], 99.90th=[ 297], 99.95th=[ 302], 00:31:24.663 | 99.99th=[ 453] 00:31:24.663 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets 00:31:24.664 slat (usec): min=9, max=28845, avg=21.80, stdev=569.90 00:31:24.664 clat (usec): min=114, max=320, avg=135.10, stdev= 8.56 00:31:24.664 lat (usec): min=127, max=29163, avg=156.90, stdev=573.58 00:31:24.664 clat percentiles (usec): 00:31:24.664 | 1.00th=[ 124], 5.00th=[ 127], 10.00th=[ 129], 20.00th=[ 131], 00:31:24.664 | 30.00th=[ 133], 40.00th=[ 135], 50.00th=[ 135], 60.00th=[ 137], 00:31:24.664 | 70.00th=[ 137], 80.00th=[ 139], 90.00th=[ 141], 95.00th=[ 143], 00:31:24.664 | 99.00th=[ 161], 99.50th=[ 184], 99.90th=[ 245], 99.95th=[ 318], 00:31:24.664 | 99.99th=[ 322] 00:31:24.664 bw ( KiB/s): min=12288, max=12288, per=100.00%, avg=12288.00, stdev= 0.00, samples=1 00:31:24.664 iops : min= 3072, max= 3072, avg=3072.00, stdev= 0.00, samples=1 00:31:24.664 lat (usec) : 250=91.37%, 500=8.63% 00:31:24.664 cpu : usr=2.40%, sys=4.50%, ctx=5044, majf=0, minf=1 00:31:24.664 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:24.664 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.664 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:24.664 issued rwts: total=2480,2560,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:24.664 latency : 
target=0, window=0, percentile=100.00%, depth=1 00:31:24.664 00:31:24.664 Run status group 0 (all jobs): 00:31:24.664 READ: bw=9910KiB/s (10.1MB/s), 9910KiB/s-9910KiB/s (10.1MB/s-10.1MB/s), io=9920KiB (10.2MB), run=1001-1001msec 00:31:24.664 WRITE: bw=9.99MiB/s (10.5MB/s), 9.99MiB/s-9.99MiB/s (10.5MB/s-10.5MB/s), io=10.0MiB (10.5MB), run=1001-1001msec 00:31:24.664 00:31:24.664 Disk stats (read/write): 00:31:24.664 nvme0n1: ios=2110/2560, merge=0/0, ticks=1457/327, in_queue=1784, util=98.50% 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:24.664 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1223 -- # local i=0 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1235 -- # return 0 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@335 -- # 
nvmfcleanup 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@99 -- # sync 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@102 -- # set +e 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:24.664 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:24.664 rmmod nvme_tcp 00:31:24.922 rmmod nvme_fabrics 00:31:24.922 rmmod nvme_keyring 00:31:24.922 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:24.922 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@106 -- # set -e 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@107 -- # return 0 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@336 -- # '[' -n 3441002 ']' 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@337 -- # killprocess 3441002 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@954 -- # '[' -z 3441002 ']' 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@958 -- # kill -0 3441002 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # uname 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3441002 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- 
common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3441002' 00:31:24.923 killing process with pid 3441002 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@973 -- # kill 3441002 00:31:24.923 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@978 -- # wait 3441002 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@342 -- # nvmf_fini 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@264 -- # local dev 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:25.182 10:49:05 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:27.088 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:27.088 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@130 -- # return 0 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 
00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:27.089 10:49:07 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # _dev=0 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@41 -- # dev_map=() 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/setup.sh@284 -- # iptr 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-save 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- nvmf/common.sh@542 -- # iptables-restore 00:31:27.089 00:31:27.089 real 0m13.257s 00:31:27.089 user 0m23.963s 00:31:27.089 sys 0m6.327s 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:31:27.089 ************************************ 00:31:27.089 END TEST nvmf_nmic 00:31:27.089 ************************************ 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:27.089 ************************************ 00:31:27.089 START TEST nvmf_fio_target 00:31:27.089 ************************************ 00:31:27.089 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1129 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp --interrupt-mode 00:31:27.349 * Looking for test storage... 00:31:27.349 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lcov --version 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # IFS=.-: 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@336 -- # read -ra ver1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # IFS=.-: 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@337 -- # read -ra ver2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@338 -- # local 'op=<' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@340 -- # ver1_l=2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@341 -- # 
ver2_l=1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@344 -- # case "$op" in 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@345 -- # : 1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # decimal 1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@365 -- # ver1[v]=1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # decimal 2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@353 -- # local d=2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@355 -- # echo 2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@366 -- # ver2[v]=2 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 
00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@368 -- # return 0 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:27.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.349 --rc genhtml_branch_coverage=1 00:31:27.349 --rc genhtml_function_coverage=1 00:31:27.349 --rc genhtml_legend=1 00:31:27.349 --rc geninfo_all_blocks=1 00:31:27.349 --rc geninfo_unexecuted_blocks=1 00:31:27.349 00:31:27.349 ' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:27.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.349 --rc genhtml_branch_coverage=1 00:31:27.349 --rc genhtml_function_coverage=1 00:31:27.349 --rc genhtml_legend=1 00:31:27.349 --rc geninfo_all_blocks=1 00:31:27.349 --rc geninfo_unexecuted_blocks=1 00:31:27.349 00:31:27.349 ' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:27.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.349 --rc genhtml_branch_coverage=1 00:31:27.349 --rc genhtml_function_coverage=1 00:31:27.349 --rc genhtml_legend=1 00:31:27.349 --rc geninfo_all_blocks=1 00:31:27.349 --rc geninfo_unexecuted_blocks=1 00:31:27.349 00:31:27.349 ' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:27.349 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:27.349 --rc 
genhtml_branch_coverage=1 00:31:27.349 --rc genhtml_function_coverage=1 00:31:27.349 --rc genhtml_legend=1 00:31:27.349 --rc geninfo_all_blocks=1 00:31:27.349 --rc geninfo_unexecuted_blocks=1 00:31:27.349 00:31:27.349 ' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@17 -- # 
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:27.349 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:27.350 10:49:07 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@15 -- # shopt -s extglob 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@50 -- # : 0 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:27.350 10:49:08 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@260 -- # remove_target_ns 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@125 -- # xtrace_disable 00:31:27.350 10:49:08 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # pci_devs=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@131 -- # local -a pci_devs 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # pci_net_devs=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # pci_drivers=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@133 -- # local -A pci_drivers 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # net_devs=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@135 -- # local -ga net_devs 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # e810=() 00:31:33.921 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@136 -- # local -ga e810 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # x722=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@137 -- # local -ga x722 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # mlx=() 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@138 -- # local -ga mlx 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:31:33.921 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:31:33.922 Found 0000:86:00.0 (0x8086 - 0x159b) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.922 
10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:31:33.922 Found 0000:86:00.1 (0x8086 - 0x159b) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in 
"${!pci_net_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:31:33.922 Found net devices under 0000:86:00.0: cvl_0_0 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@234 -- # [[ up == up ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:31:33.922 Found net devices under 0000:86:00.1: cvl_0_1 00:31:33.922 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@262 -- # is_hw=yes 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@257 -- # create_target_ns 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:31:33.922 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@27 -- # local -gA dev_map 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@28 -- # local -g _dev 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # ips=() 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:31:33.922 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:31:33.923 
10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772161 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:31:33.923 10.0.0.1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.923 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@11 -- # local val=167772162 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:31:33.923 10.0.0.2 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:31:33.923 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@85 -- # 
dev_map["$key_target"]=cvl_0_1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@38 -- # ping_ips 1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target 
-- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:31:33.923 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:31:33.923 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.461 ms 00:31:33.923 00:31:33.923 --- 10.0.0.1 ping statistics --- 00:31:33.923 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.923 rtt min/avg/max/mdev = 0.461/0.461/0.461/0.000 ms 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.923 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:31:33.924 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:31:33.924 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.226 ms 00:31:33.924 00:31:33.924 --- 10.0.0.2 ping statistics --- 00:31:33.924 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:31:33.924 rtt min/avg/max/mdev = 0.226/0.226/0.226/0.000 ms 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair++ )) 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@270 -- # return 0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.924 10:49:13 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 
-- # local dev=initiator1 in_ns= ip 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=initiator1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:31:33.924 10:49:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.924 10:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target0 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target0 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:31:33.924 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:31:33.925 10:49:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # get_net_dev target1 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@107 -- # local dev=target1 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@109 -- # return 1 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@168 -- # dev= 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@169 -- # return 0 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@315 -- # '[' tcp == tcp 
']' 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@726 -- # xtrace_disable 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@328 -- # nvmfpid=3445390 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@329 -- # waitforlisten 3445390 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0xF 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@835 -- # '[' -z 3445390 ']' 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@840 -- # local max_retries=100 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:31:33.925 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@844 -- # xtrace_disable 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.925 [2024-11-20 10:49:14.123331] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:31:33.925 [2024-11-20 10:49:14.124223] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:31:33.925 [2024-11-20 10:49:14.124256] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:31:33.925 [2024-11-20 10:49:14.199278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:31:33.925 [2024-11-20 10:49:14.240884] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:31:33.925 [2024-11-20 10:49:14.240921] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:31:33.925 [2024-11-20 10:49:14.240929] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:31:33.925 [2024-11-20 10:49:14.240934] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:31:33.925 [2024-11-20 10:49:14.240940] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:31:33.925 [2024-11-20 10:49:14.242334] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:31:33.925 [2024-11-20 10:49:14.242442] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:31:33.925 [2024-11-20 10:49:14.242552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:31:33.925 [2024-11-20 10:49:14.242552] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:31:33.925 [2024-11-20 10:49:14.308480] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:31:33.925 [2024-11-20 10:49:14.309125] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:31:33.925 [2024-11-20 10:49:14.309451] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 00:31:33.925 [2024-11-20 10:49:14.309936] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:31:33.925 [2024-11-20 10:49:14.309973] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 
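The setup trace earlier in this section (`nvmf/setup.sh@11`, `val_to_ip`) converts the integer 167772162 into the dotted-quad address 10.0.0.2 before assigning it to the namespaced interface. A standalone sketch of that conversion, assuming only POSIX shell arithmetic (the real helper lives in SPDK's test `nvmf/setup.sh` and receives the value from an IP pool counter):

```shell
# val_to_ip: split a 32-bit integer into four bytes and print it in
# dotted-quad notation, mirroring the printf-based helper traced above.
val_to_ip() {
	local val=$1
	printf '%u.%u.%u.%u\n' \
		$(( (val >> 24) & 0xff )) \
		$(( (val >> 16) & 0xff )) \
		$(( (val >> 8) & 0xff )) \
		$(( val & 0xff ))
}

# 167772162 == 0x0A000002, i.e. bytes 10.0.0.2 as seen in the trace
val_to_ip 167772162
```

Incrementing the pool by 2 per device pair (as in the `ip_pool += 2` step above) yields consecutive initiator/target addresses such as 10.0.0.1 and 10.0.0.2.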
00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@868 -- # return 0 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@732 -- # xtrace_disable 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:31:33.925 [2024-11-20 10:49:14.543193] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:31:33.925 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.184 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:31:34.184 10:49:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.443 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:31:34.443 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.702 
10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:31:34.702 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:34.961 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:31:34.961 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:31:34.961 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.219 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:31:35.219 10:49:15 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.476 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:31:35.476 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:31:35.734 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:31:35.734 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:31:35.734 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- 
target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:31:35.993 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:35.993 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:31:36.251 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:31:36.251 10:49:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:31:36.507 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:31:36.507 [2024-11-20 10:49:17.175119] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:31:36.507 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:31:36.763 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:31:37.020 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n 
nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:31:37.277 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:31:37.278 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1202 -- # local i=0 00:31:37.278 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:31:37.278 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1204 -- # [[ -n 4 ]] 00:31:37.278 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1205 -- # nvme_device_counter=4 00:31:37.278 10:49:17 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1209 -- # sleep 2 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1211 -- # nvme_devices=4 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1212 -- # return 0 00:31:39.800 10:49:19 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:31:39.800 [global] 00:31:39.800 thread=1 00:31:39.800 invalidate=1 
00:31:39.800 rw=write
00:31:39.800 time_based=1
00:31:39.800 runtime=1
00:31:39.800 ioengine=libaio
00:31:39.800 direct=1
00:31:39.800 bs=4096
00:31:39.800 iodepth=1
00:31:39.800 norandommap=0
00:31:39.800 numjobs=1
00:31:39.800
00:31:39.800 verify_dump=1
00:31:39.800 verify_backlog=512
00:31:39.800 verify_state_save=0
00:31:39.800 do_verify=1
00:31:39.800 verify=crc32c-intel
00:31:39.800 [job0]
00:31:39.800 filename=/dev/nvme0n1
00:31:39.800 [job1]
00:31:39.800 filename=/dev/nvme0n2
00:31:39.800 [job2]
00:31:39.800 filename=/dev/nvme0n3
00:31:39.800 [job3]
00:31:39.800 filename=/dev/nvme0n4
00:31:39.800 Could not set queue depth (nvme0n1)
00:31:39.800 Could not set queue depth (nvme0n2)
00:31:39.800 Could not set queue depth (nvme0n3)
00:31:39.800 Could not set queue depth (nvme0n4)
00:31:39.800 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:39.800 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:39.800 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:39.800 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:39.800 fio-3.35
00:31:39.800 Starting 4 threads
00:31:41.173
00:31:41.173 job0: (groupid=0, jobs=1): err= 0: pid=3446587: Wed Nov 20 10:49:21 2024
00:31:41.173 read: IOPS=20, BW=83.7KiB/s (85.7kB/s)(84.0KiB/1004msec)
00:31:41.173 slat (nsec): min=9593, max=23915, avg=19252.38, stdev=5594.93
00:31:41.173 clat (usec): min=40864, max=41853, avg=41010.99, stdev=204.15
00:31:41.173 lat (usec): min=40876, max=41863, avg=41030.25, stdev=201.98
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[40633], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:31:41.173 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:41.173 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:41.173 | 99.00th=[41681], 99.50th=[41681], 99.90th=[41681], 99.95th=[41681],
00:31:41.173 | 99.99th=[41681]
00:31:41.173 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets
00:31:41.173 slat (usec): min=9, max=13047, avg=37.28, stdev=576.11
00:31:41.173 clat (usec): min=166, max=374, avg=237.45, stdev=18.37
00:31:41.173 lat (usec): min=179, max=13421, avg=274.72, stdev=582.37
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 176], 5.00th=[ 190], 10.00th=[ 221], 20.00th=[ 237],
00:31:41.173 | 30.00th=[ 241], 40.00th=[ 241], 50.00th=[ 241], 60.00th=[ 243],
00:31:41.173 | 70.00th=[ 243], 80.00th=[ 245], 90.00th=[ 247], 95.00th=[ 249],
00:31:41.173 | 99.00th=[ 277], 99.50th=[ 310], 99.90th=[ 375], 99.95th=[ 375],
00:31:41.173 | 99.99th=[ 375]
00:31:41.173 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:31:41.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:41.173 lat (usec) : 250=92.68%, 500=3.38%
00:31:41.173 lat (msec) : 50=3.94%
00:31:41.173 cpu : usr=0.20%, sys=0.70%, ctx=535, majf=0, minf=1
00:31:41.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 issued rwts: total=21,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.173 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:41.173 job1: (groupid=0, jobs=1): err= 0: pid=3446608: Wed Nov 20 10:49:21 2024
00:31:41.173 read: IOPS=22, BW=91.6KiB/s (93.8kB/s)(92.0KiB/1004msec)
00:31:41.173 slat (nsec): min=9222, max=24004, avg=21042.39, stdev=3692.46
00:31:41.173 clat (usec): min=412, max=41960, avg=39250.91, stdev=8469.09
00:31:41.173 lat (usec): min=422, max=41981, avg=39271.95, stdev=8471.52
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 412], 5.00th=[40633], 10.00th=[40633], 20.00th=[41157],
00:31:41.173 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:41.173 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:41.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:31:41.173 | 99.99th=[42206]
00:31:41.173 write: IOPS=509, BW=2040KiB/s (2089kB/s)(2048KiB/1004msec); 0 zone resets
00:31:41.173 slat (nsec): min=10228, max=40240, avg=11543.64, stdev=2260.76
00:31:41.173 clat (usec): min=142, max=313, avg=181.68, stdev=26.75
00:31:41.173 lat (usec): min=153, max=332, avg=193.22, stdev=27.00
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 145], 5.00th=[ 149], 10.00th=[ 153], 20.00th=[ 159],
00:31:41.173 | 30.00th=[ 163], 40.00th=[ 169], 50.00th=[ 176], 60.00th=[ 184],
00:31:41.173 | 70.00th=[ 192], 80.00th=[ 208], 90.00th=[ 221], 95.00th=[ 227],
00:31:41.173 | 99.00th=[ 249], 99.50th=[ 260], 99.90th=[ 314], 99.95th=[ 314],
00:31:41.173 | 99.99th=[ 314]
00:31:41.173 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:31:41.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:41.173 lat (usec) : 250=94.77%, 500=1.12%
00:31:41.173 lat (msec) : 50=4.11%
00:31:41.173 cpu : usr=0.30%, sys=1.00%, ctx=535, majf=0, minf=1
00:31:41.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 issued rwts: total=23,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.173 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:41.173 job2: (groupid=0, jobs=1): err= 0: pid=3446632: Wed Nov 20 10:49:21 2024
00:31:41.173 read: IOPS=1024, BW=4096KiB/s (4194kB/s)(4096KiB/1000msec)
00:31:41.173 slat (nsec): min=6672, max=28199, avg=7714.55, stdev=1729.02
00:31:41.173 clat (usec): min=196, max=41989, avg=746.90, stdev=4577.67
00:31:41.173 lat (usec): min=203, max=42000, avg=754.62, stdev=4578.14
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 212], 5.00th=[ 215], 10.00th=[ 219], 20.00th=[ 221],
00:31:41.173 | 30.00th=[ 223], 40.00th=[ 225], 50.00th=[ 227], 60.00th=[ 229],
00:31:41.173 | 70.00th=[ 231], 80.00th=[ 235], 90.00th=[ 241], 95.00th=[ 249],
00:31:41.173 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41681], 99.95th=[42206],
00:31:41.173 | 99.99th=[42206]
00:31:41.173 write: IOPS=1261, BW=5044KiB/s (5165kB/s)(5044KiB/1000msec); 0 zone resets
00:31:41.173 slat (nsec): min=9308, max=40670, avg=10502.29, stdev=1210.58
00:31:41.173 clat (usec): min=138, max=388, avg=165.97, stdev=21.52
00:31:41.173 lat (usec): min=149, max=398, avg=176.48, stdev=21.78
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 143], 5.00th=[ 145], 10.00th=[ 147], 20.00th=[ 149],
00:31:41.173 | 30.00th=[ 151], 40.00th=[ 153], 50.00th=[ 157], 60.00th=[ 167],
00:31:41.173 | 70.00th=[ 178], 80.00th=[ 188], 90.00th=[ 196], 95.00th=[ 202],
00:31:41.173 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 306], 99.95th=[ 388],
00:31:41.173 | 99.99th=[ 388]
00:31:41.173 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:31:41.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:41.173 lat (usec) : 250=98.12%, 500=1.27%, 750=0.04%
00:31:41.173 lat (msec) : 50=0.57%
00:31:41.173 cpu : usr=0.50%, sys=2.70%, ctx=2285, majf=0, minf=1
00:31:41.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 issued rwts: total=1024,1261,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.173 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:41.173 job3: (groupid=0, jobs=1): err= 0: pid=3446642: Wed Nov 20 10:49:21 2024
00:31:41.173 read: IOPS=23, BW=95.4KiB/s (97.7kB/s)(96.0KiB/1006msec)
00:31:41.173 slat (nsec): min=7585, max=24301, avg=21513.79, stdev=4792.01
00:31:41.173 clat (usec): min=221, max=42021, avg=37595.10, stdev=11509.77
00:31:41.173 lat (usec): min=231, max=42043, avg=37616.61, stdev=11511.19
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 223], 5.00th=[ 245], 10.00th=[40633], 20.00th=[40633],
00:31:41.173 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:41.173 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:41.173 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206],
00:31:41.173 | 99.99th=[42206]
00:31:41.173 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets
00:31:41.173 slat (nsec): min=10542, max=38492, avg=11644.58, stdev=1679.24
00:31:41.173 clat (usec): min=150, max=395, avg=186.69, stdev=20.24
00:31:41.173 lat (usec): min=162, max=406, avg=198.34, stdev=20.70
00:31:41.173 clat percentiles (usec):
00:31:41.173 | 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 167], 20.00th=[ 172],
00:31:41.173 | 30.00th=[ 178], 40.00th=[ 182], 50.00th=[ 186], 60.00th=[ 190],
00:31:41.173 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 212],
00:31:41.173 | 99.00th=[ 233], 99.50th=[ 347], 99.90th=[ 396], 99.95th=[ 396],
00:31:41.173 | 99.99th=[ 396]
00:31:41.173 bw ( KiB/s): min= 4096, max= 4096, per=36.83%, avg=4096.00, stdev= 0.00, samples=1
00:31:41.173 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:41.173 lat (usec) : 250=95.34%, 500=0.56%
00:31:41.173 lat (msec) : 50=4.10%
00:31:41.173 cpu : usr=0.40%, sys=0.40%, ctx=537, majf=0, minf=1
00:31:41.173 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:41.173 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:41.173 issued rwts: total=24,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:41.173 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:41.173
00:31:41.173 Run status group 0 (all jobs):
00:31:41.173 READ: bw=4342KiB/s (4446kB/s), 83.7KiB/s-4096KiB/s (85.7kB/s-4194kB/s), io=4368KiB (4473kB), run=1000-1006msec
00:31:41.173 WRITE: bw=10.9MiB/s (11.4MB/s), 2036KiB/s-5044KiB/s (2085kB/s-5165kB/s), io=10.9MiB (11.5MB), run=1000-1006msec
00:31:41.173
00:31:41.173 Disk stats (read/write):
00:31:41.173 nvme0n1: ios=42/512, merge=0/0, ticks=1522/120, in_queue=1642, util=87.37%
00:31:41.173 nvme0n2: ios=68/512, merge=0/0, ticks=758/86, in_queue=844, util=85.36%
00:31:41.174 nvme0n3: ios=569/778, merge=0/0, ticks=724/139, in_queue=863, util=89.31%
00:31:41.174 nvme0n4: ios=78/512, merge=0/0, ticks=1473/93, in_queue=1566, util=97.22%
00:31:41.174 10:49:21 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v
00:31:41.174 [global]
00:31:41.174 thread=1
00:31:41.174 invalidate=1
00:31:41.174 rw=randwrite
00:31:41.174 time_based=1
00:31:41.174 runtime=1
00:31:41.174 ioengine=libaio
00:31:41.174 direct=1
00:31:41.174 bs=4096
00:31:41.174 iodepth=1
00:31:41.174 norandommap=0
00:31:41.174 numjobs=1
00:31:41.174
00:31:41.174 verify_dump=1
00:31:41.174 verify_backlog=512
00:31:41.174 verify_state_save=0
00:31:41.174 do_verify=1
00:31:41.174 verify=crc32c-intel
00:31:41.174 [job0]
00:31:41.174 filename=/dev/nvme0n1
00:31:41.174 [job1]
00:31:41.174 filename=/dev/nvme0n2
00:31:41.174 [job2]
00:31:41.174 filename=/dev/nvme0n3
00:31:41.174 [job3]
00:31:41.174 filename=/dev/nvme0n4
00:31:41.174 Could not set queue depth (nvme0n1)
00:31:41.174 Could not set queue depth (nvme0n2)
00:31:41.174 Could not set queue depth (nvme0n3)
00:31:41.174 Could not set queue depth (nvme0n4)
00:31:41.174 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:41.174 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:41.174 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:41.174 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1
00:31:41.174 fio-3.35
00:31:41.174 Starting 4 threads
00:31:42.545
00:31:42.545 job0: (groupid=0, jobs=1): err= 0: pid=3447025: Wed Nov 20 10:49:23 2024
00:31:42.545 read: IOPS=2165, BW=8663KiB/s (8871kB/s)(8672KiB/1001msec)
00:31:42.545 slat (nsec): min=6209, max=29332, avg=7057.79, stdev=909.06
00:31:42.545 clat (usec): min=202, max=441, avg=242.45, stdev=13.88
00:31:42.545 lat (usec): min=209, max=448, avg=249.51, stdev=13.87
00:31:42.545 clat percentiles (usec):
00:31:42.545 | 1.00th=[ 212], 5.00th=[ 221], 10.00th=[ 225], 20.00th=[ 231],
00:31:42.545 | 30.00th=[ 239], 40.00th=[ 243], 50.00th=[ 245], 60.00th=[ 247],
00:31:42.545 | 70.00th=[ 249], 80.00th=[ 251], 90.00th=[ 255], 95.00th=[ 258],
00:31:42.545 | 99.00th=[ 265], 99.50th=[ 273], 99.90th=[ 408], 99.95th=[ 412],
00:31:42.545 | 99.99th=[ 441]
00:31:42.545 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:31:42.545 slat (nsec): min=8867, max=46548, avg=9693.31, stdev=1130.90
00:31:42.545 clat (usec): min=137, max=525, avg=166.02, stdev=20.83
00:31:42.545 lat (usec): min=147, max=535, avg=175.71, stdev=20.96
00:31:42.545 clat percentiles (usec):
00:31:42.545 | 1.00th=[ 147], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155],
00:31:42.545 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 163],
00:31:42.545 | 70.00th=[ 167], 80.00th=[ 172], 90.00th=[ 180], 95.00th=[ 208],
00:31:42.545 | 99.00th=[ 221], 99.50th=[ 233], 99.90th=[ 469], 99.95th=[ 523],
00:31:42.545 | 99.99th=[ 529]
00:31:42.545 bw ( KiB/s): min=10936, max=10936, per=44.77%, avg=10936.00, stdev= 0.00, samples=1
00:31:42.545 iops : min= 2734, max= 2734, avg=2734.00, stdev= 0.00, samples=1
00:31:42.545 lat (usec) : 250=87.86%, 500=12.10%, 750=0.04%
00:31:42.545 cpu : usr=2.50%, sys=4.00%, ctx=4728, majf=0, minf=1
00:31:42.545 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:42.545 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.545 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.545 issued rwts: total=2168,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.545 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:42.545 job1: (groupid=0, jobs=1): err= 0: pid=3447038: Wed Nov 20 10:49:23 2024
00:31:42.545 read: IOPS=21, BW=87.6KiB/s (89.7kB/s)(88.0KiB/1005msec)
00:31:42.546 slat (nsec): min=9112, max=30275, avg=20164.45, stdev=5756.81
00:31:42.546 clat (usec): min=40783, max=41125, avg=40964.90, stdev=78.54
00:31:42.546 lat (usec): min=40792, max=41148, avg=40985.07, stdev=77.47
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157],
00:31:42.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:42.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:42.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:31:42.546 | 99.99th=[41157]
00:31:42.546 write: IOPS=509, BW=2038KiB/s (2087kB/s)(2048KiB/1005msec); 0 zone resets
00:31:42.546 slat (nsec): min=8978, max=47464, avg=10420.10, stdev=2196.50
00:31:42.546 clat (usec): min=145, max=378, avg=187.70, stdev=20.56
00:31:42.546 lat (usec): min=155, max=388, avg=198.12, stdev=21.10
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[ 157], 5.00th=[ 163], 10.00th=[ 167], 20.00th=[ 174],
00:31:42.546 | 30.00th=[ 178], 40.00th=[ 184], 50.00th=[ 186], 60.00th=[ 190],
00:31:42.546 | 70.00th=[ 194], 80.00th=[ 198], 90.00th=[ 206], 95.00th=[ 217],
00:31:42.546 | 99.00th=[ 247], 99.50th=[ 343], 99.90th=[ 379], 99.95th=[ 379],
00:31:42.546 | 99.99th=[ 379]
00:31:42.546 bw ( KiB/s): min= 4096, max= 4096, per=16.77%, avg=4096.00, stdev= 0.00, samples=1
00:31:42.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:42.546 lat (usec) : 250=95.13%, 500=0.75%
00:31:42.546 lat (msec) : 50=4.12%
00:31:42.546 cpu : usr=0.10%, sys=0.70%, ctx=534, majf=0, minf=1
00:31:42.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:42.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.546 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:42.546 job2: (groupid=0, jobs=1): err= 0: pid=3447053: Wed Nov 20 10:49:23 2024
00:31:42.546 read: IOPS=21, BW=87.5KiB/s (89.6kB/s)(88.0KiB/1006msec)
00:31:42.546 slat (nsec): min=9735, max=24768, avg=23341.27, stdev=3060.37
00:31:42.546 clat (usec): min=40887, max=41174, avg=40976.48, stdev=60.58
00:31:42.546 lat (usec): min=40911, max=41183, avg=40999.82, stdev=58.45
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157],
00:31:42.546 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157],
00:31:42.546 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157],
00:31:42.546 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157],
00:31:42.546 | 99.99th=[41157]
00:31:42.546 write: IOPS=508, BW=2036KiB/s (2085kB/s)(2048KiB/1006msec); 0 zone resets
00:31:42.546 slat (nsec): min=9554, max=40563, avg=10471.99, stdev=1475.54
00:31:42.546 clat (usec): min=154, max=290, avg=188.64, stdev=18.06
00:31:42.546 lat (usec): min=164, max=300, avg=199.11, stdev=18.29
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[ 161], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 176],
00:31:42.546 | 30.00th=[ 180], 40.00th=[ 184], 50.00th=[ 188], 60.00th=[ 192],
00:31:42.546 | 70.00th=[ 194], 80.00th=[ 200], 90.00th=[ 206], 95.00th=[ 223],
00:31:42.546 | 99.00th=[ 260], 99.50th=[ 269], 99.90th=[ 289], 99.95th=[ 289],
00:31:42.546 | 99.99th=[ 289]
00:31:42.546 bw ( KiB/s): min= 4096, max= 4096, per=16.77%, avg=4096.00, stdev= 0.00, samples=1
00:31:42.546 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1
00:31:42.546 lat (usec) : 250=94.76%, 500=1.12%
00:31:42.546 lat (msec) : 50=4.12%
00:31:42.546 cpu : usr=0.40%, sys=0.40%, ctx=536, majf=0, minf=1
00:31:42.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:42.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 issued rwts: total=22,512,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.546 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:42.546 job3: (groupid=0, jobs=1): err= 0: pid=3447060: Wed Nov 20 10:49:23 2024
00:31:42.546 read: IOPS=2164, BW=8659KiB/s (8867kB/s)(8668KiB/1001msec)
00:31:42.546 slat (nsec): min=7428, max=26679, avg=8456.86, stdev=883.85
00:31:42.546 clat (usec): min=204, max=417, avg=238.77, stdev=11.34
00:31:42.546 lat (usec): min=213, max=426, avg=247.23, stdev=11.33
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[ 219], 5.00th=[ 223], 10.00th=[ 225], 20.00th=[ 229],
00:31:42.546 | 30.00th=[ 233], 40.00th=[ 237], 50.00th=[ 241], 60.00th=[ 243],
00:31:42.546 | 70.00th=[ 245], 80.00th=[ 249], 90.00th=[ 251], 95.00th=[ 255],
00:31:42.546 | 99.00th=[ 262], 99.50th=[ 265], 99.90th=[ 330], 99.95th=[ 334],
00:31:42.546 | 99.99th=[ 416]
00:31:42.546 write: IOPS=2557, BW=9.99MiB/s (10.5MB/s)(10.0MiB/1001msec); 0 zone resets
00:31:42.546 slat (nsec): min=10991, max=41820, avg=12126.25, stdev=1563.51
00:31:42.546 clat (usec): min=138, max=307, avg=164.42, stdev=11.77
00:31:42.546 lat (usec): min=150, max=346, avg=176.55, stdev=12.20
00:31:42.546 clat percentiles (usec):
00:31:42.546 | 1.00th=[ 149], 5.00th=[ 151], 10.00th=[ 153], 20.00th=[ 155],
00:31:42.546 | 30.00th=[ 157], 40.00th=[ 159], 50.00th=[ 161], 60.00th=[ 165],
00:31:42.546 | 70.00th=[ 167], 80.00th=[ 174], 90.00th=[ 180], 95.00th=[ 186],
00:31:42.546 | 99.00th=[ 204], 99.50th=[ 208], 99.90th=[ 273], 99.95th=[ 285],
00:31:42.546 | 99.99th=[ 310]
00:31:42.546 bw ( KiB/s): min=10960, max=10960, per=44.86%, avg=10960.00, stdev= 0.00, samples=1
00:31:42.546 iops : min= 2740, max= 2740, avg=2740.00, stdev= 0.00, samples=1
00:31:42.546 lat (usec) : 250=93.82%, 500=6.18%
00:31:42.546 cpu : usr=3.00%, sys=4.90%, ctx=4728, majf=0, minf=1
00:31:42.546 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0%
00:31:42.546 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:42.546 issued rwts: total=2167,2560,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:42.546 latency : target=0, window=0, percentile=100.00%, depth=1
00:31:42.546
00:31:42.546 Run status group 0 (all jobs):
00:31:42.546 READ: bw=17.0MiB/s (17.8MB/s), 87.5KiB/s-8663KiB/s (89.6kB/s-8871kB/s), io=17.1MiB (17.9MB), run=1001-1006msec
00:31:42.546 WRITE: bw=23.9MiB/s (25.0MB/s), 2036KiB/s-9.99MiB/s (2085kB/s-10.5MB/s), io=24.0MiB (25.2MB), run=1001-1006msec
00:31:42.546
00:31:42.546 Disk stats (read/write):
00:31:42.546 nvme0n1: ios=1996/2048, merge=0/0, ticks=475/337, in_queue=812, util=86.77%
00:31:42.546 nvme0n2: ios=36/512, merge=0/0, ticks=879/95, in_queue=974, util=89.54%
00:31:42.546 nvme0n3: ios=56/512, merge=0/0, ticks=1582/99, in_queue=1681, util=99.07%
00:31:42.546 nvme0n4: ios=1967/2048, merge=0/0, ticks=1384/316, in_queue=1700, util=97.17%
00:31:42.546 10:49:23 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v
00:31:42.546 [global]
00:31:42.546 thread=1
00:31:42.546 invalidate=1
00:31:42.546 rw=write
00:31:42.546 time_based=1
00:31:42.546 runtime=1
00:31:42.546 ioengine=libaio
00:31:42.546 direct=1
00:31:42.546 bs=4096
00:31:42.546 iodepth=128
00:31:42.546 norandommap=0
00:31:42.546 numjobs=1
00:31:42.546
00:31:42.546 verify_dump=1
00:31:42.546 verify_backlog=512
00:31:42.546 verify_state_save=0
00:31:42.546 do_verify=1
00:31:42.546 verify=crc32c-intel
00:31:42.546 [job0]
00:31:42.546 filename=/dev/nvme0n1
00:31:42.546 [job1]
00:31:42.546 filename=/dev/nvme0n2
00:31:42.546 [job2]
00:31:42.546 filename=/dev/nvme0n3
00:31:42.546 [job3]
00:31:42.546 filename=/dev/nvme0n4
00:31:42.546 Could not set queue depth (nvme0n1)
00:31:42.546 Could not set queue depth (nvme0n2)
00:31:42.546 Could not set queue depth (nvme0n3)
00:31:42.546 Could not set queue depth (nvme0n4)
00:31:42.804 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:42.804 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:42.804 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:42.804 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:42.804 fio-3.35
00:31:42.804 Starting 4 threads
00:31:44.175
00:31:44.175 job0: (groupid=0, jobs=1): err= 0: pid=3447441: Wed Nov 20 10:49:24 2024
00:31:44.175 read: IOPS=4571, BW=17.9MiB/s (18.7MB/s)(18.0MiB/1008msec)
00:31:44.175 slat (nsec): min=1364, max=12972k, avg=90428.75, stdev=769539.89
00:31:44.175 clat (usec): min=3190, max=55801, avg=11908.16, stdev=5145.54
00:31:44.175 lat (usec): min=3199, max=55809, avg=11998.59, stdev=5192.22
00:31:44.175 clat percentiles (usec):
00:31:44.175 | 1.00th=[ 7046], 5.00th=[ 8094], 10.00th=[ 8356], 20.00th=[ 8848],
00:31:44.175 | 30.00th=[ 9372], 40.00th=[ 9634], 50.00th=[10159], 60.00th=[10814],
00:31:44.175 | 70.00th=[11469], 80.00th=[13435], 90.00th=[16450], 95.00th=[24773],
00:31:44.175 | 99.00th=[30016], 99.50th=[40109], 99.90th=[40109], 99.95th=[40633],
00:31:44.175 | 99.99th=[55837]
00:31:44.175 write: IOPS=4909, BW=19.2MiB/s (20.1MB/s)(19.3MiB/1008msec); 0 zone resets
00:31:44.175 slat (usec): min=2, max=58605, avg=106.81, stdev=1192.91
00:31:44.175 clat (usec): min=1476, max=67646, avg=12264.29, stdev=7680.29
00:31:44.175 lat (usec): min=1989, max=103050, avg=12371.10, stdev=7839.49
00:31:44.175 clat percentiles (usec):
00:31:44.175 | 1.00th=[ 4113], 5.00th=[ 5932], 10.00th=[ 6783], 20.00th=[ 8848],
00:31:44.175 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[10159],
00:31:44.175 | 70.00th=[11600], 80.00th=[13960], 90.00th=[19530], 95.00th=[26346],
00:31:44.175 | 99.00th=[52167], 99.50th=[53216], 99.90th=[54264], 99.95th=[54264],
00:31:44.175 | 99.99th=[67634]
00:31:44.175 bw ( KiB/s): min=16384, max=22184, per=26.94%, avg=19284.00, stdev=4101.22, samples=2
00:31:44.175 iops : min= 4096, max= 5546, avg=4821.00, stdev=1025.30, samples=2
00:31:44.175 lat (msec) : 2=0.07%, 4=0.52%, 10=50.06%, 20=41.61%, 50=7.05%
00:31:44.176 lat (msec) : 100=0.68%
00:31:44.176 cpu : usr=4.87%, sys=4.67%, ctx=323, majf=0, minf=1
00:31:44.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.3%
00:31:44.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:44.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:44.176 issued rwts: total=4608,4949,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:44.176 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:44.176 job1: (groupid=0, jobs=1): err= 0: pid=3447453: Wed Nov 20 10:49:24 2024
00:31:44.176 read: IOPS=3943, BW=15.4MiB/s (16.2MB/s)(15.5MiB/1005msec)
00:31:44.176 slat (nsec): min=1113, max=19937k, avg=99611.58, stdev=826102.98
00:31:44.176 clat (usec): min=1200, max=81268, avg=14716.42, stdev=12653.08
00:31:44.176 lat (usec): min=1922, max=81275, avg=14816.03, stdev=12688.81
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 2376], 5.00th=[ 6456], 10.00th=[ 8225], 20.00th=[ 8586],
00:31:44.176 | 30.00th=[ 9372], 40.00th=[10028], 50.00th=[10945], 60.00th=[12125],
00:31:44.176 | 70.00th=[13960], 80.00th=[15401], 90.00th=[20055], 95.00th=[55313],
00:31:44.176 | 99.00th=[67634], 99.50th=[68682], 99.90th=[81265], 99.95th=[81265],
00:31:44.176 | 99.99th=[81265]
00:31:44.176 write: IOPS=4075, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1005msec); 0 zone resets
00:31:44.176 slat (usec): min=2, max=46647, avg=134.08, stdev=1313.27
00:31:44.176 clat (usec): min=1077, max=70126, avg=15285.64, stdev=13531.19
00:31:44.176 lat (usec): min=1085, max=70133, avg=15419.73, stdev=13636.33
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 3064], 5.00th=[ 5735], 10.00th=[ 6325], 20.00th=[ 8225],
00:31:44.176 | 30.00th=[ 8979], 40.00th=[ 9896], 50.00th=[10421], 60.00th=[11731],
00:31:44.176 | 70.00th=[13435], 80.00th=[16581], 90.00th=[33424], 95.00th=[50594],
00:31:44.176 | 99.00th=[68682], 99.50th=[69731], 99.90th=[69731], 99.95th=[69731],
00:31:44.176 | 99.99th=[69731]
00:31:44.176 bw ( KiB/s): min=12288, max=20480, per=22.89%, avg=16384.00, stdev=5792.62, samples=2
00:31:44.176 iops : min= 3072, max= 5120, avg=4096.00, stdev=1448.15, samples=2
00:31:44.176 lat (msec) : 2=0.79%, 4=1.85%, 10=39.47%, 20=45.76%, 50=6.75%
00:31:44.176 lat (msec) : 100=5.37%
00:31:44.176 cpu : usr=2.59%, sys=4.48%, ctx=279, majf=0, minf=1
00:31:44.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:31:44.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:44.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:44.176 issued rwts: total=3963,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:44.176 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:44.176 job2: (groupid=0, jobs=1): err= 0: pid=3447466: Wed Nov 20 10:49:24 2024
00:31:44.176 read: IOPS=3657, BW=14.3MiB/s (15.0MB/s)(15.0MiB/1049msec)
00:31:44.176 slat (nsec): min=1438, max=18263k, avg=108917.92, stdev=898470.57
00:31:44.176 clat (usec): min=2956, max=73808, avg=15721.51, stdev=9886.65
00:31:44.176 lat (usec): min=2964, max=88090, avg=15830.42, stdev=9960.91
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 5342], 5.00th=[ 8291], 10.00th=[ 8979], 20.00th=[10552],
00:31:44.176 | 30.00th=[11731], 40.00th=[12518], 50.00th=[13173], 60.00th=[13960],
00:31:44.176 | 70.00th=[16057], 80.00th=[18744], 90.00th=[22938], 95.00th=[28443],
00:31:44.176 | 99.00th=[62653], 99.50th=[62653], 99.90th=[73925], 99.95th=[73925],
00:31:44.176 | 99.99th=[73925]
00:31:44.176 write: IOPS=3904, BW=15.3MiB/s (16.0MB/s)(16.0MiB/1049msec); 0 zone resets
00:31:44.176 slat (usec): min=2, max=11695, avg=126.37, stdev=794.81
00:31:44.176 clat (usec): min=433, max=91713, avg=17421.05, stdev=14806.49
00:31:44.176 lat (usec): min=463, max=91720, avg=17547.41, stdev=14889.91
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 979], 5.00th=[ 3654], 10.00th=[ 5407], 20.00th=[ 7963],
00:31:44.176 | 30.00th=[10552], 40.00th=[12649], 50.00th=[13304], 60.00th=[14222],
00:31:44.176 | 70.00th=[17695], 80.00th=[23200], 90.00th=[31327], 95.00th=[51643],
00:31:44.176 | 99.00th=[83362], 99.50th=[90702], 99.90th=[91751], 99.95th=[91751],
00:31:44.176 | 99.99th=[91751]
00:31:44.176 bw ( KiB/s): min=16384, max=16384, per=22.89%, avg=16384.00, stdev= 0.00, samples=2
00:31:44.176 iops : min= 4096, max= 4096, avg=4096.00, stdev= 0.00, samples=2
00:31:44.176 lat (usec) : 500=0.03%, 750=0.08%, 1000=0.62%
00:31:44.176 lat (msec) : 2=0.79%, 4=1.92%, 10=19.84%, 20=57.86%, 50=14.47%
00:31:44.176 lat (msec) : 100=4.40%
00:31:44.176 cpu : usr=3.15%, sys=5.92%, ctx=317, majf=0, minf=2
00:31:44.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2%
00:31:44.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:44.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:44.176 issued rwts: total=3837,4096,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:44.176 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:44.176 job3: (groupid=0, jobs=1): err= 0: pid=3447467: Wed Nov 20 10:49:24 2024
00:31:44.176 read: IOPS=5517, BW=21.6MiB/s (22.6MB/s)(21.7MiB/1006msec)
00:31:44.176 slat (nsec): min=1415, max=10740k, avg=88853.28, stdev=748832.48
00:31:44.176 clat (usec): min=5340, max=22838, avg=11631.10, stdev=2942.29
00:31:44.176 lat (usec): min=5342, max=22844, avg=11719.95, stdev=3004.87
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 6063], 5.00th=[ 8094], 10.00th=[ 8848], 20.00th=[ 9634],
00:31:44.176 | 30.00th=[10028], 40.00th=[10552], 50.00th=[10814], 60.00th=[11469],
00:31:44.176 | 70.00th=[11863], 80.00th=[13435], 90.00th=[15926], 95.00th=[17957],
00:31:44.176 | 99.00th=[21365], 99.50th=[21890], 99.90th=[22938], 99.95th=[22938],
00:31:44.176 | 99.99th=[22938]
00:31:44.176 write: IOPS=5598, BW=21.9MiB/s (22.9MB/s)(22.0MiB/1006msec); 0 zone resets
00:31:44.176 slat (usec): min=2, max=19333, avg=83.07, stdev=674.99
00:31:44.176 clat (usec): min=1639, max=21822, avg=10682.39, stdev=2594.44
00:31:44.176 lat (usec): min=1653, max=21845, avg=10765.45, stdev=2634.97
00:31:44.176 clat percentiles (usec):
00:31:44.176 | 1.00th=[ 5080], 5.00th=[ 7046], 10.00th=[ 7373], 20.00th=[ 8356],
00:31:44.176 | 30.00th=[ 8848], 40.00th=[10421], 50.00th=[10945], 60.00th=[11207],
00:31:44.176 | 70.00th=[11600], 80.00th=[12387], 90.00th=[14746], 95.00th=[15533],
00:31:44.176 | 99.00th=[16909], 99.50th=[16909], 99.90th=[20317], 99.95th=[21365],
00:31:44.176 | 99.99th=[21890]
00:31:44.176 bw ( KiB/s): min=21968, max=23088, per=31.47%, avg=22528.00, stdev=791.96, samples=2
00:31:44.176 iops : min= 5492, max= 5772, avg=5632.00, stdev=197.99, samples=2
00:31:44.176 lat (msec) : 2=0.04%, 4=0.05%, 10=33.02%, 20=65.94%, 50=0.94%
00:31:44.176 cpu : usr=4.38%, sys=7.46%, ctx=310, majf=0, minf=1
00:31:44.176 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:31:44.176 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:44.176 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:44.176 issued rwts: total=5551,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:44.176 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:44.176
00:31:44.176 Run status group 0 (all jobs):
00:31:44.176 READ: bw=66.9MiB/s (70.1MB/s), 14.3MiB/s-21.6MiB/s (15.0MB/s-22.6MB/s), io=70.2MiB (73.6MB), run=1005-1049msec
00:31:44.176 WRITE: bw=69.9MiB/s (73.3MB/s), 15.3MiB/s-21.9MiB/s (16.0MB/s-22.9MB/s), io=73.3MiB (76.9MB), run=1005-1049msec
00:31:44.176
00:31:44.176 Disk stats (read/write):
00:31:44.176 nvme0n1: ios=4113/4230, merge=0/0, ticks=43827/48966, in_queue=92793, util=91.88%
00:31:44.176 nvme0n2: ios=3124/3362, merge=0/0, ticks=26182/29591, in_queue=55773, util=95.84%
00:31:44.176 nvme0n3: ios=3222/3584, merge=0/0, ticks=41040/62435, in_queue=103475, util=96.78%
00:31:44.176 nvme0n4: ios=4667/4864, merge=0/0, ticks=51646/49959, in_queue=101605, util=100.00%
00:31:44.176 10:49:24 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v
00:31:44.176 [global]
00:31:44.176 thread=1
00:31:44.176 invalidate=1
00:31:44.176 rw=randwrite
00:31:44.176 time_based=1
00:31:44.176 runtime=1
00:31:44.176 ioengine=libaio
00:31:44.176 direct=1
00:31:44.176 bs=4096
00:31:44.176 iodepth=128
00:31:44.176 norandommap=0
00:31:44.176 numjobs=1
00:31:44.176
00:31:44.176 verify_dump=1
00:31:44.176 verify_backlog=512
00:31:44.176 verify_state_save=0
00:31:44.176 do_verify=1
00:31:44.176 verify=crc32c-intel
00:31:44.176 [job0]
00:31:44.176 filename=/dev/nvme0n1
00:31:44.176 [job1]
00:31:44.176 filename=/dev/nvme0n2
00:31:44.176 [job2]
00:31:44.176 filename=/dev/nvme0n3
00:31:44.176 [job3]
00:31:44.176 filename=/dev/nvme0n4
00:31:44.176 Could not set queue depth (nvme0n1)
00:31:44.176 Could not set queue depth (nvme0n2)
00:31:44.176 Could not set queue depth (nvme0n3)
00:31:44.176 Could not set queue depth (nvme0n4)
00:31:44.434 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:44.434 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:44.434 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:44.434 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128
00:31:44.434 fio-3.35
00:31:44.434 Starting 4 threads
00:31:45.809
00:31:45.809 job0: (groupid=0, jobs=1): err= 0: pid=3447835: Wed Nov 20 10:49:26 2024
00:31:45.809 read: IOPS=1227, BW=4911KiB/s (5029kB/s)(4936KiB/1005msec)
00:31:45.809 slat (nsec): min=1167, max=26749k, avg=396223.80, stdev=2282956.03
00:31:45.809 clat (msec): min=3, max=104, avg=49.74, stdev=21.12
00:31:45.809 lat (msec): min=11, max=104, avg=50.13, stdev=21.30
00:31:45.809 clat percentiles (msec):
00:31:45.809 | 1.00th=[ 12], 5.00th=[ 13], 10.00th=[ 15], 20.00th=[ 31],
00:31:45.809 | 30.00th=[ 39], 40.00th=[ 45], 50.00th=[ 51], 60.00th=[ 59],
00:31:45.809 | 70.00th=[ 64], 80.00th=[ 70], 90.00th=[ 75], 95.00th=[ 79],
00:31:45.809 | 99.00th=[ 93], 99.50th=[ 96], 99.90th=[ 102], 99.95th=[ 105],
00:31:45.809 | 99.99th=[ 105]
00:31:45.809 write: IOPS=1528, BW=6113KiB/s (6260kB/s)(6144KiB/1005msec); 0 zone resets
00:31:45.809 slat (nsec): min=1778, max=28482k, avg=327992.28, stdev=1976870.35
00:31:45.809 clat (msec): min=4, max=100, avg=42.05, stdev=25.80
00:31:45.809 lat (msec): min=4, max=100, avg=42.37, stdev=26.01
00:31:45.809 clat percentiles (msec):
00:31:45.809 | 1.00th=[ 6], 5.00th=[ 9], 10.00th=[ 9], 20.00th=[ 12],
00:31:45.809 | 30.00th=[ 21], 40.00th=[ 33], 50.00th=[ 46], 60.00th=[ 52],
00:31:45.809 | 70.00th=[ 58], 80.00th=[ 68], 90.00th=[ 78], 95.00th=[ 84],
00:31:45.809 | 99.00th=[ 94], 99.50th=[ 99], 99.90th=[ 101], 99.95th=[ 101],
00:31:45.809 | 99.99th=[ 101]
00:31:45.809 bw ( KiB/s): min= 4440, max= 7848, per=10.16%, avg=6144.00, stdev=2409.82, samples=2
00:31:45.809 iops : min= 1110, max= 1962, avg=1536.00, stdev=602.45, samples=2
00:31:45.809 lat (msec) : 4=0.04%, 10=9.21%, 20=13.18%, 50=32.82%, 100=44.62%
00:31:45.809 lat (msec) : 250=0.14%
00:31:45.809 cpu : usr=1.00%, sys=1.29%, ctx=183, majf=0, minf=1
00:31:45.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.3%, 16=0.6%, 32=1.2%, >=64=97.7%
00:31:45.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:45.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:45.809 issued rwts: total=1234,1536,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:45.809 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:45.809 job1: (groupid=0, jobs=1): err= 0: pid=3447836: Wed Nov 20 10:49:26 2024
00:31:45.809 read: IOPS=2524, BW=9.86MiB/s (10.3MB/s)(10.0MiB/1014msec)
00:31:45.809 slat (nsec): min=1833, max=17360k, avg=129847.22, stdev=988700.26
00:31:45.809 clat (usec): min=6911, max=38724, avg=16681.83, stdev=6225.11
00:31:45.809 lat (usec): min=6925, max=38731, avg=16811.67, stdev=6301.82
00:31:45.809 clat percentiles (usec):
00:31:45.809 | 1.00th=[ 7046], 5.00th=[ 9110], 10.00th=[ 9503], 20.00th=[11731],
00:31:45.809 | 30.00th=[12911], 40.00th=[13960], 50.00th=[15139], 60.00th=[17433],
00:31:45.809 | 70.00th=[19006], 80.00th=[22938], 90.00th=[25822], 95.00th=[27395],
00:31:45.809 | 99.00th=[32375], 99.50th=[33817], 99.90th=[38536], 99.95th=[38536],
00:31:45.809 | 99.99th=[38536]
00:31:45.809 write: IOPS=2846, BW=11.1MiB/s (11.7MB/s)(11.3MiB/1014msec); 0 zone resets
00:31:45.809 slat (usec): min=3, max=15871, avg=224.55, stdev=1340.91
00:31:45.809 clat (msec): min=3, max=133, avg=29.73, stdev=29.07
00:31:45.809 lat (msec): min=3, max=133, avg=29.96, stdev=29.27
00:31:45.809 clat percentiles (msec):
00:31:45.809 | 1.00th=[ 10], 5.00th=[ 12], 10.00th=[ 13], 20.00th=[ 14],
00:31:45.809 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 18], 60.00th=[ 21],
00:31:45.809 | 70.00th=[ 24], 80.00th=[ 34], 90.00th=[ 75], 95.00th=[ 108],
00:31:45.809 | 99.00th=[ 129], 99.50th=[ 132], 99.90th=[ 134], 99.95th=[ 134],
00:31:45.809 | 99.99th=[ 134]
00:31:45.809 bw ( KiB/s): min= 5688, max=16384, per=18.26%, avg=11036.00, stdev=7563.21, samples=2
00:31:45.809 iops : min= 1422, max= 4096, avg=2759.00, stdev=1890.80, samples=2
00:31:45.809 lat (msec) : 4=0.11%, 10=8.72%, 20=57.25%, 50=25.74%, 100=4.92%
00:31:45.809 lat (msec) : 250=3.25%
00:31:45.809 cpu : usr=2.17%, sys=4.54%, ctx=182, majf=0, minf=1
00:31:45.809 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.3%, 32=0.6%, >=64=98.8%
00:31:45.809 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:45.809 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:45.809 issued rwts: total=2560,2886,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:45.809 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:45.809 job2: (groupid=0, jobs=1): err= 0: pid=3447837: Wed Nov 20 10:49:26 2024
00:31:45.809 read: IOPS=5147, BW=20.1MiB/s (21.1MB/s)(20.2MiB/1004msec)
00:31:45.809 slat (nsec): min=1155, max=17757k, avg=91045.97, stdev=678545.22
00:31:45.809 clat (usec): min=2228, max=64104, avg=11225.70, stdev=5934.65
00:31:45.809 lat (usec): min=2233, max=64108, avg=11316.75, stdev=5996.08
00:31:45.809 clat percentiles (usec):
00:31:45.809 | 1.00th=[ 3392], 5.00th=[ 5080], 10.00th=[ 6063], 20.00th=[ 8160],
00:31:45.810 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9634], 60.00th=[10945],
00:31:45.810 | 70.00th=[12387], 80.00th=[12911], 90.00th=[15926], 95.00th=[18220],
00:31:45.810 | 99.00th=[30540], 99.50th=[48497], 99.90th=[64226], 99.95th=[64226],
00:31:45.810 | 99.99th=[64226]
00:31:45.810 write: IOPS=5609, BW=21.9MiB/s (23.0MB/s)(22.0MiB/1004msec); 0 zone resets
00:31:45.810 slat (nsec): min=1888, max=10016k, avg=85958.68, stdev=562217.89
00:31:45.810 clat (usec): min=202, max=77303, avg=12292.72, stdev=9391.69
00:31:45.810 lat (usec): min=215, max=77307, avg=12378.67, stdev=9439.73
00:31:45.810 clat percentiles (usec):
00:31:45.810 | 1.00th=[ 1975], 5.00th=[ 4883], 10.00th=[ 6587], 20.00th=[ 8455],
00:31:45.810 | 30.00th=[ 8979], 40.00th=[ 9241], 50.00th=[ 9503], 60.00th=[10290],
00:31:45.810 | 70.00th=[11207], 80.00th=[12387], 90.00th=[19530], 95.00th=[32375],
00:31:45.810 | 99.00th=[58983], 99.50th=[65274], 99.90th=[77071], 99.95th=[77071],
00:31:45.810 | 99.99th=[77071]
00:31:45.810 bw ( KiB/s): min=16584, max=27840, per=36.74%, avg=22212.00, stdev=7959.19, samples=2
00:31:45.810 iops : min= 4146, max= 6960, avg=5553.00, stdev=1989.80, samples=2
00:31:45.810 lat (usec) : 250=0.01%, 500=0.02%, 750=0.01%, 1000=0.10%
00:31:45.810 lat (msec) : 2=0.51%, 4=2.56%, 10=51.86%, 20=38.52%, 50=5.25%
00:31:45.810 lat (msec) : 100=1.17%
00:31:45.810 cpu : usr=3.99%, sys=6.18%, ctx=388, majf=0, minf=2
00:31:45.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.3%, >=64=99.4%
00:31:45.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:31:45.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1%
00:31:45.810 issued rwts: total=5168,5632,0,0 short=0,0,0,0 dropped=0,0,0,0
00:31:45.810 latency : target=0, window=0, percentile=100.00%, depth=128
00:31:45.810 job3: (groupid=0, jobs=1): err= 0: pid=3447838: Wed Nov 20 10:49:26 2024
00:31:45.810 read: IOPS=5049, BW=19.7MiB/s (20.7MB/s)(20.0MiB/1014msec)
00:31:45.810 slat (nsec): min=1733, max=28803k, avg=99083.33, stdev=927291.14
00:31:45.810 clat (usec): min=1376, max=55594, avg=14118.41, stdev=8454.25
00:31:45.810 lat (usec): min=1384, max=58466, avg=14217.49, stdev=8520.15
00:31:45.810 clat percentiles (usec):
00:31:45.810 |
1.00th=[ 1680], 5.00th=[ 6980], 10.00th=[ 7504], 20.00th=[ 8291], 00:31:45.810 | 30.00th=[ 8979], 40.00th=[10159], 50.00th=[11863], 60.00th=[14353], 00:31:45.810 | 70.00th=[15926], 80.00th=[18220], 90.00th=[24249], 95.00th=[27395], 00:31:45.810 | 99.00th=[51119], 99.50th=[53216], 99.90th=[55837], 99.95th=[55837], 00:31:45.810 | 99.99th=[55837] 00:31:45.810 write: IOPS=5197, BW=20.3MiB/s (21.3MB/s)(20.6MiB/1014msec); 0 zone resets 00:31:45.810 slat (nsec): min=1777, max=18500k, avg=62825.23, stdev=667092.52 00:31:45.810 clat (usec): min=418, max=36225, avg=10699.08, stdev=6077.85 00:31:45.810 lat (usec): min=505, max=39818, avg=10761.90, stdev=6129.99 00:31:45.810 clat percentiles (usec): 00:31:45.810 | 1.00th=[ 1369], 5.00th=[ 2868], 10.00th=[ 4047], 20.00th=[ 6063], 00:31:45.810 | 30.00th=[ 6718], 40.00th=[ 8094], 50.00th=[ 9372], 60.00th=[10683], 00:31:45.810 | 70.00th=[12518], 80.00th=[15533], 90.00th=[20841], 95.00th=[22152], 00:31:45.810 | 99.00th=[27132], 99.50th=[31589], 99.90th=[33162], 99.95th=[35914], 00:31:45.810 | 99.99th=[36439] 00:31:45.810 bw ( KiB/s): min=16424, max=24720, per=34.03%, avg=20572.00, stdev=5866.16, samples=2 00:31:45.810 iops : min= 4106, max= 6180, avg=5143.00, stdev=1466.54, samples=2 00:31:45.810 lat (usec) : 500=0.01%, 750=0.08%, 1000=0.25% 00:31:45.810 lat (msec) : 2=1.74%, 4=4.15%, 10=41.55%, 20=38.51%, 50=12.83% 00:31:45.810 lat (msec) : 100=0.89% 00:31:45.810 cpu : usr=3.46%, sys=6.02%, ctx=313, majf=0, minf=1 00:31:45.810 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.3%, >=64=99.4% 00:31:45.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:45.810 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:31:45.810 issued rwts: total=5120,5270,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:45.810 latency : target=0, window=0, percentile=100.00%, depth=128 00:31:45.810 00:31:45.810 Run status group 0 (all jobs): 00:31:45.810 READ: bw=54.2MiB/s (56.9MB/s), 
4911KiB/s-20.1MiB/s (5029kB/s-21.1MB/s), io=55.0MiB (57.7MB), run=1004-1014msec 00:31:45.810 WRITE: bw=59.0MiB/s (61.9MB/s), 6113KiB/s-21.9MiB/s (6260kB/s-23.0MB/s), io=59.9MiB (62.8MB), run=1004-1014msec 00:31:45.810 00:31:45.810 Disk stats (read/write): 00:31:45.810 nvme0n1: ios=1045/1375, merge=0/0, ticks=17633/19314, in_queue=36947, util=91.28% 00:31:45.810 nvme0n2: ios=2075/2559, merge=0/0, ticks=35809/69035, in_queue=104844, util=99.59% 00:31:45.810 nvme0n3: ios=4623/5004, merge=0/0, ticks=36243/36491, in_queue=72734, util=99.27% 00:31:45.810 nvme0n4: ios=4307/4608, merge=0/0, ticks=55772/48754, in_queue=104526, util=99.69% 00:31:45.810 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:31:45.810 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=3448019 00:31:45.810 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:31:45.810 10:49:26 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:31:45.810 [global] 00:31:45.810 thread=1 00:31:45.810 invalidate=1 00:31:45.810 rw=read 00:31:45.810 time_based=1 00:31:45.810 runtime=10 00:31:45.810 ioengine=libaio 00:31:45.810 direct=1 00:31:45.810 bs=4096 00:31:45.810 iodepth=1 00:31:45.810 norandommap=1 00:31:45.810 numjobs=1 00:31:45.810 00:31:45.810 [job0] 00:31:45.810 filename=/dev/nvme0n1 00:31:45.810 [job1] 00:31:45.810 filename=/dev/nvme0n2 00:31:45.810 [job2] 00:31:45.810 filename=/dev/nvme0n3 00:31:45.810 [job3] 00:31:45.810 filename=/dev/nvme0n4 00:31:45.810 Could not set queue depth (nvme0n1) 00:31:45.810 Could not set queue depth (nvme0n2) 00:31:45.810 Could not set queue depth (nvme0n3) 00:31:45.810 Could not set queue depth (nvme0n4) 00:31:46.079 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:31:46.079 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.079 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.079 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:31:46.079 fio-3.35 00:31:46.079 Starting 4 threads 00:31:48.605 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:31:48.862 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:31:48.862 fio: io_u error on file /dev/nvme0n4: Operation not supported: read offset=31318016, buflen=4096 00:31:48.862 fio: pid=3448211, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.120 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.120 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:31:49.120 fio: io_u error on file /dev/nvme0n3: Operation not supported: read offset=294912, buflen=4096 00:31:49.120 fio: pid=3448210, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.377 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.377 10:49:29 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:31:49.377 
fio: io_u error on file /dev/nvme0n1: Operation not supported: read offset=311296, buflen=4096 00:31:49.377 fio: pid=3448208, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.377 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.377 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:31:49.377 fio: io_u error on file /dev/nvme0n2: Operation not supported: read offset=45023232, buflen=4096 00:31:49.377 fio: pid=3448209, err=95/file:io_u.c:1889, func=io_u error, error=Operation not supported 00:31:49.635 00:31:49.635 job0: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3448208: Wed Nov 20 10:49:30 2024 00:31:49.635 read: IOPS=24, BW=97.0KiB/s (99.3kB/s)(304KiB/3135msec) 00:31:49.635 slat (usec): min=7, max=19774, avg=430.46, stdev=2596.33 00:31:49.635 clat (usec): min=537, max=42284, avg=40533.52, stdev=4658.59 00:31:49.635 lat (usec): min=603, max=61090, avg=40969.35, stdev=5425.79 00:31:49.635 clat percentiles (usec): 00:31:49.635 | 1.00th=[ 537], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:49.635 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:49.635 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[42206], 00:31:49.635 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:31:49.635 | 99.99th=[42206] 00:31:49.635 bw ( KiB/s): min= 92, max= 104, per=0.43%, avg=96.67, stdev= 3.93, samples=6 00:31:49.635 iops : min= 23, max= 26, avg=24.17, stdev= 0.98, samples=6 00:31:49.635 lat (usec) : 750=1.30% 00:31:49.635 lat (msec) : 50=97.40% 00:31:49.635 cpu : usr=0.10%, sys=0.00%, ctx=80, majf=0, minf=1 00:31:49.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 
32=0.0%, >=64=0.0% 00:31:49.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 issued rwts: total=77,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.635 job1: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3448209: Wed Nov 20 10:49:30 2024 00:31:49.635 read: IOPS=3291, BW=12.9MiB/s (13.5MB/s)(42.9MiB/3340msec) 00:31:49.635 slat (usec): min=6, max=18513, avg=11.21, stdev=250.18 00:31:49.635 clat (usec): min=165, max=41429, avg=289.45, stdev=1984.34 00:31:49.635 lat (usec): min=172, max=49968, avg=300.66, stdev=2017.00 00:31:49.635 clat percentiles (usec): 00:31:49.635 | 1.00th=[ 174], 5.00th=[ 176], 10.00th=[ 178], 20.00th=[ 182], 00:31:49.635 | 30.00th=[ 184], 40.00th=[ 188], 50.00th=[ 192], 60.00th=[ 196], 00:31:49.635 | 70.00th=[ 200], 80.00th=[ 206], 90.00th=[ 210], 95.00th=[ 215], 00:31:49.635 | 99.00th=[ 229], 99.50th=[ 277], 99.90th=[41157], 99.95th=[41157], 00:31:49.635 | 99.99th=[41681] 00:31:49.635 bw ( KiB/s): min= 96, max=21016, per=63.41%, avg=14266.50, stdev=8039.37, samples=6 00:31:49.635 iops : min= 24, max= 5254, avg=3566.50, stdev=2009.91, samples=6 00:31:49.635 lat (usec) : 250=99.40%, 500=0.35% 00:31:49.635 lat (msec) : 50=0.24% 00:31:49.635 cpu : usr=0.90%, sys=2.91%, ctx=10999, majf=0, minf=2 00:31:49.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 issued rwts: total=10993,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.635 job2: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): 
pid=3448210: Wed Nov 20 10:49:30 2024 00:31:49.635 read: IOPS=24, BW=97.7KiB/s (100kB/s)(288KiB/2948msec) 00:31:49.635 slat (usec): min=11, max=14826, avg=225.83, stdev=1732.56 00:31:49.635 clat (usec): min=467, max=41253, avg=40415.62, stdev=4774.73 00:31:49.635 lat (usec): min=506, max=55992, avg=40644.28, stdev=5112.48 00:31:49.635 clat percentiles (usec): 00:31:49.635 | 1.00th=[ 469], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:31:49.635 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:31:49.635 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:31:49.635 | 99.00th=[41157], 99.50th=[41157], 99.90th=[41157], 99.95th=[41157], 00:31:49.635 | 99.99th=[41157] 00:31:49.635 bw ( KiB/s): min= 96, max= 104, per=0.44%, avg=99.20, stdev= 4.38, samples=5 00:31:49.635 iops : min= 24, max= 26, avg=24.80, stdev= 1.10, samples=5 00:31:49.635 lat (usec) : 500=1.37% 00:31:49.635 lat (msec) : 50=97.26% 00:31:49.635 cpu : usr=0.14%, sys=0.00%, ctx=74, majf=0, minf=2 00:31:49.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 complete : 0=1.4%, 4=98.6%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 issued rwts: total=73,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.635 job3: (groupid=0, jobs=1): err=95 (file:io_u.c:1889, func=io_u error, error=Operation not supported): pid=3448211: Wed Nov 20 10:49:30 2024 00:31:49.635 read: IOPS=2777, BW=10.8MiB/s (11.4MB/s)(29.9MiB/2753msec) 00:31:49.635 slat (nsec): min=6913, max=43987, avg=8496.03, stdev=1574.15 00:31:49.635 clat (usec): min=177, max=42076, avg=346.84, stdev=2327.96 00:31:49.635 lat (usec): min=192, max=42084, avg=355.33, stdev=2328.16 00:31:49.635 clat percentiles (usec): 00:31:49.635 | 1.00th=[ 190], 5.00th=[ 192], 10.00th=[ 194], 20.00th=[ 196], 00:31:49.635 | 
30.00th=[ 198], 40.00th=[ 206], 50.00th=[ 210], 60.00th=[ 212], 00:31:49.635 | 70.00th=[ 217], 80.00th=[ 221], 90.00th=[ 253], 95.00th=[ 277], 00:31:49.635 | 99.00th=[ 293], 99.50th=[ 310], 99.90th=[41157], 99.95th=[41157], 00:31:49.635 | 99.99th=[42206] 00:31:49.635 bw ( KiB/s): min= 192, max=19128, per=54.23%, avg=12201.60, stdev=8231.46, samples=5 00:31:49.635 iops : min= 48, max= 4782, avg=3050.40, stdev=2057.86, samples=5 00:31:49.635 lat (usec) : 250=89.97%, 500=9.68%, 750=0.01% 00:31:49.635 lat (msec) : 50=0.33% 00:31:49.635 cpu : usr=1.16%, sys=4.83%, ctx=7648, majf=0, minf=2 00:31:49.635 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:31:49.635 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:31:49.635 issued rwts: total=7647,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:31:49.635 latency : target=0, window=0, percentile=100.00%, depth=1 00:31:49.635 00:31:49.635 Run status group 0 (all jobs): 00:31:49.635 READ: bw=22.0MiB/s (23.0MB/s), 97.0KiB/s-12.9MiB/s (99.3kB/s-13.5MB/s), io=73.4MiB (76.9MB), run=2753-3340msec 00:31:49.635 00:31:49.635 Disk stats (read/write): 00:31:49.635 nvme0n1: ios=75/0, merge=0/0, ticks=3041/0, in_queue=3041, util=94.85% 00:31:49.635 nvme0n2: ios=11016/0, merge=0/0, ticks=3648/0, in_queue=3648, util=98.14% 00:31:49.635 nvme0n3: ios=70/0, merge=0/0, ticks=2830/0, in_queue=2830, util=96.05% 00:31:49.635 nvme0n4: ios=7637/0, merge=0/0, ticks=2442/0, in_queue=2442, util=96.45% 00:31:49.635 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.635 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:31:49.893 10:49:30 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:49.893 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:31:50.150 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.150 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:31:50.407 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:31:50.407 10:49:30 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:31:50.407 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:31:50.407 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # wait 3448019 00:31:50.407 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:31:50.407 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:31:50.665 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1223 -- # local i=0 00:31:50.665 10:49:31 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1235 -- # return 0 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:31:50.665 nvmf hotplug test: fio failed as expected 00:31:50.665 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@335 -- # 
nvmfcleanup 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@99 -- # sync 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@102 -- # set +e 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@103 -- # for i in {1..20} 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:31:50.923 rmmod nvme_tcp 00:31:50.923 rmmod nvme_fabrics 00:31:50.923 rmmod nvme_keyring 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:31:50.923 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@106 -- # set -e 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@107 -- # return 0 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@336 -- # '[' -n 3445390 ']' 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@337 -- # killprocess 3445390 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@954 -- # '[' -z 3445390 ']' 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@958 -- # kill -0 3445390 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # uname 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3445390 
00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3445390' 00:31:50.924 killing process with pid 3445390 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@973 -- # kill 3445390 00:31:50.924 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@978 -- # wait 3445390 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@342 -- # nvmf_fini 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@264 -- # local dev 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@267 -- # remove_target_ns 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:51.183 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:51.184 10:49:31 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@268 -- # delete_main_bridge 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:31:53.195 10:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@130 -- # return 0 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # eval 
' ip addr flush dev cvl_0_1' 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # _dev=0 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@41 -- # dev_map=() 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/setup.sh@284 -- # iptr 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-save 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- nvmf/common.sh@542 -- # iptables-restore 00:31:53.195 00:31:53.195 real 0m26.007s 00:31:53.195 user 1m31.245s 00:31:53.195 sys 0m11.148s 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@1130 -- # xtrace_disable 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:31:53.195 ************************************ 00:31:53.195 END TEST nvmf_fio_target 00:31:53.195 ************************************ 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:31:53.195 10:49:33 
nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:31:53.195 ************************************ 00:31:53.195 START TEST nvmf_bdevio 00:31:53.195 ************************************ 00:31:53.195 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --interrupt-mode 00:31:53.455 * Looking for test storage... 00:31:53.455 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:31:53.455 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:31:53.455 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lcov --version 00:31:53.455 10:49:33 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@333 -- # local ver1 ver1_l 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@334 -- # local ver2 ver2_l 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # IFS=.-: 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@336 -- # read -ra ver1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # IFS=.-: 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@337 -- # read -ra ver2 00:31:53.455 10:49:34 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@338 -- # local 'op=<' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@340 -- # ver1_l=2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@341 -- # ver2_l=1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@344 -- # case "$op" in 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@345 -- # : 1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v = 0 )) 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # decimal 1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@365 -- # ver1[v]=1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # decimal 2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@353 -- # local d=2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@355 -- # echo 2 
00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@366 -- # ver2[v]=2 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@368 -- # return 0 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:31:53.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.455 --rc genhtml_branch_coverage=1 00:31:53.455 --rc genhtml_function_coverage=1 00:31:53.455 --rc genhtml_legend=1 00:31:53.455 --rc geninfo_all_blocks=1 00:31:53.455 --rc geninfo_unexecuted_blocks=1 00:31:53.455 00:31:53.455 ' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:31:53.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.455 --rc genhtml_branch_coverage=1 00:31:53.455 --rc genhtml_function_coverage=1 00:31:53.455 --rc genhtml_legend=1 00:31:53.455 --rc geninfo_all_blocks=1 00:31:53.455 --rc geninfo_unexecuted_blocks=1 00:31:53.455 00:31:53.455 ' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:31:53.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.455 --rc genhtml_branch_coverage=1 00:31:53.455 --rc genhtml_function_coverage=1 00:31:53.455 --rc genhtml_legend=1 00:31:53.455 --rc geninfo_all_blocks=1 00:31:53.455 --rc geninfo_unexecuted_blocks=1 00:31:53.455 00:31:53.455 ' 
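The `cmp_versions 1.15 '<' 2` trace above splits both version strings on `.-:` and compares them component by component. A minimal sketch of that logic (an assumption modeled on the traced behavior, not the actual `scripts/common.sh` implementation):

```shell
# Sketch of the "lt" / cmp_versions flow traced above: split each version
# on '.', '-', ':' and compare numerically, left to right.
lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # first operand newer
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0   # first operand older
  done
  return 1   # versions equal
}

lt 1.15 2 && echo "lcov 1.15 is older than 2"
```

This is why the run above takes the `return 0` branch: lcov reports 1.15, the first component 1 is below 2, so the legacy `--rc lcov_branch_coverage=...` option spelling is selected for LCOV_OPTS.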
00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:31:53.455 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:31:53.455 --rc genhtml_branch_coverage=1 00:31:53.455 --rc genhtml_function_coverage=1 00:31:53.455 --rc genhtml_legend=1 00:31:53.455 --rc geninfo_all_blocks=1 00:31:53.455 --rc geninfo_unexecuted_blocks=1 00:31:53.455 00:31:53.455 ' 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.455 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@16 -- # 
NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@15 -- # shopt -s extglob 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
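The trace above shows `nvme gen-hostnqn` producing the host NQN that `nvmf/common.sh` then pairs with the host ID in the `NVME_HOST` array. A small sketch of how those connect flags are assembled (the NQN layout mirrors what the trace printed; this is not the nvme-cli implementation itself):

```shell
# UUID value copied from the trace above; gen-hostnqn embeds it in the
# standard uuid-based NQN form.
uuid="00ad29c2-ccbd-e911-906e-0017a4403562"
NVME_HOSTNQN="nqn.2014-08.org.nvmexpress:uuid:${uuid}"
NVME_HOSTID="$uuid"
# common.sh packs both into the array later handed to `nvme connect`:
NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID")
printf '%s\n' "${NVME_HOST[@]}"
```

Keeping the flags in an array (rather than a single string) preserves them as separate arguments when expanded as `"${NVME_HOST[@]}"`.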
00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@50 -- # : 0 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@31 -- # '[' 
1 -eq 1 ']' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@54 -- # have_pci_nics=0 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@296 -- # prepare_net_devs 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@258 -- # local -g is_hw=no 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@260 -- # remove_target_ns 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # [[ 
phy != virt ]] 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@125 -- # xtrace_disable 00:31:53.456 10:49:34 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.025 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:00.025 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # pci_devs=() 00:32:00.025 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:00.025 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:00.025 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # net_devs=() 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # e810=() 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@136 -- # local -ga e810 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # x722=() 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@137 -- # local -ga x722 00:32:00.026 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # mlx=() 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@138 -- # local -ga mlx 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:00.026 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:00.026 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:00.026 Found 0000:86:00.1 (0x8086 - 0x159b) 
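The `pci_net_devs` lookup traced above globs `/sys/bus/pci/devices/$pci/net/*` and strips the path prefix to recover the kernel net device name (here `cvl_0_0` under 0000:86:00.0). A hedged sketch of that lookup; the sysfs root is a parameter so the demo can run against a scratch directory, and the helper name and fake layout are illustrative, not from the real host:

```shell
# List net devices that sysfs records under a given PCI function,
# mimicking the pci_net_devs=(".../$pci/net/"*) glob plus the
# "${pci_net_devs[@]##*/}" prefix strip seen in the trace.
list_net_devs() {
  local root=$1 pci=$2 dev
  for dev in "$root/$pci/net/"*; do
    [ -e "$dev" ] && echo "${dev##*/}"   # e.g. cvl_0_0
  done
}

# Demo against a stand-in for /sys/bus/pci/devices:
root=$(mktemp -d)
mkdir -p "$root/0000:86:00.0/net/cvl_0_0"
list_net_devs "$root" 0000:86:00.0   # prints cvl_0_0
```

On the real host this is what lets the script map each matched PCI address (0000:86:00.0, 0000:86:00.1) to its `cvl_0_*` interface before moving one end into the test namespace.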
00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # 
echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:00.026 Found net devices under 0000:86:00.0: cvl_0_0 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:00.026 Found net devices under 0000:86:00.1: cvl_0_1 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@262 -- # is_hw=yes 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:00.026 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@257 -- # create_target_ns 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:00.026 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@28 -- # local -g _dev 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:00.026 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # ips=() 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:00.027 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772161 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:00.027 10:49:39 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:00.027 10.0.0.1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@11 -- # local val=167772162 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:00.027 10.0.0.2 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:00.027 10:49:39 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@217 -- # ip netns exec 
nvmf_ns_spdk ip link set cvl_0_1 up 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 
-- # get_initiator_ip_address 0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 
in_ns=NVMF_TARGET_NS_CMD count=1 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:00.027 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:00.027 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.412 ms 00:32:00.027 00:32:00.027 --- 10.0.0.1 ping statistics --- 00:32:00.027 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.027 rtt min/avg/max/mdev = 0.412/0.412/0.412/0.000 ms 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target0 
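The trace above is setup.sh resolving the logical name initiator0 to the physical device cvl_0_0, reading its address back out of /sys/class/net/cvl_0_0/ifalias, and pinging it from inside the nvmf_ns_spdk namespace; the same lookup then repeats for target0/cvl_0_1. A condensed, unprivileged sketch of that lookup-and-ping step (device names and IPs are the ones this log reports; echo stands in for the privileged ping, and the hardcoded case statement stands in for the real ifalias read):

```shell
#!/usr/bin/env bash
# Dry-run sketch of the setup.sh connectivity check seen in this trace.
# The real script reads the IP from /sys/class/net/$dev/ifalias; here the
# values observed in this run are hardcoded so the sketch runs anywhere.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)

get_ip_address() {
    local dev=${dev_map[$1]}
    case $dev in
        cvl_0_0) echo 10.0.0.1 ;;   # initiator-side address in this run
        cvl_0_1) echo 10.0.0.2 ;;   # target-side address in this run
    esac
}

ping_ip() {  # args: ip [netns] -- print the ping the suite would run
    local ip=$1 ns=$2
    echo "+ ${ns:+ip netns exec $ns }ping -c 1 $ip"
}

# The initiator IP is pinged from inside the target's namespace, the
# target IP from the host side -- exactly the pairing in the log above.
ping_ip "$(get_ip_address initiator0)" nvmf_ns_spdk
ping_ip "$(get_ip_address target0)"
```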
00:32:00.027 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:00.028 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:00.028 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.147 ms 00:32:00.028 00:32:00.028 --- 10.0.0.2 ping statistics --- 00:32:00.028 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:00.028 rtt min/avg/max/mdev = 0.147/0.147/0.147/0.000 ms 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@270 -- # return 0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # 
get_net_dev initiator0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n 
'' ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local 
dev=target0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:00.028 10:49:40 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@107 -- # local dev=target1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@109 -- # return 1 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@168 -- # dev= 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@169 -- # return 0 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- 
nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@328 -- # nvmfpid=3452478 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@329 -- # waitforlisten 3452478 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@835 -- # '[' -z 3452478 ']' 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:00.028 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:00.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 [2024-11-20 10:49:40.224239] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:00.029 [2024-11-20 10:49:40.225151] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
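The nvmfappstart step above launches nvmf_tgt inside the test namespace with interrupt mode enabled on core mask 0x78, then waitforlisten polls until the app accepts RPCs on /var/tmp/spdk.sock. A dry-run sketch of that launch (the path and flags are the ones this log shows; echo replaces the real launch, and the retry count mirrors the max_retries=100 in the trace):

```shell
#!/usr/bin/env bash
# Sketch of the nvmfappstart sequence from the trace above. The target is
# started in the nvmf_ns_spdk netns with --interrupt-mode, then the suite
# waits for the RPC UNIX socket to appear before issuing any rpc_cmd.
nvmf_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
rpc_sock=/var/tmp/spdk.sock

launch_cmd="ip netns exec nvmf_ns_spdk $nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x78"
echo "+ $launch_cmd"   # dry run: the real suite backgrounds this and records nvmfpid

# waitforlisten, simplified: retry until the app listens on the socket.
waitforlisten() {
    local max_retries=100 i
    for ((i = 0; i < max_retries; i++)); do
        [[ -S $rpc_sock ]] && return 0
        sleep 0.1
    done
    return 1
}
```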
00:32:00.029 [2024-11-20 10:49:40.225187] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:00.029 [2024-11-20 10:49:40.304014] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:32:00.029 [2024-11-20 10:49:40.345862] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:00.029 [2024-11-20 10:49:40.345899] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:00.029 [2024-11-20 10:49:40.345907] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:00.029 [2024-11-20 10:49:40.345912] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:00.029 [2024-11-20 10:49:40.345918] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:00.029 [2024-11-20 10:49:40.347494] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 4 00:32:00.029 [2024-11-20 10:49:40.347597] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 5 00:32:00.029 [2024-11-20 10:49:40.347701] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:32:00.029 [2024-11-20 10:49:40.347702] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 6 00:32:00.029 [2024-11-20 10:49:40.413566] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:00.029 [2024-11-20 10:49:40.414102] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:32:00.029 [2024-11-20 10:49:40.414467] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
00:32:00.029 [2024-11-20 10:49:40.414835] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:00.029 [2024-11-20 10:49:40.414885] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@868 -- # return 0 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 [2024-11-20 10:49:40.484553] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 Malloc0 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:00.029 [2024-11-20 10:49:40.572766] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 
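At this point bdevio.sh@18 through @22 have issued five RPCs against the running target: create the TCP transport, back it with a 64 MiB malloc bdev, create subsystem cnode1, attach the namespace, and open the 10.0.0.2:4420 listener that the NOTICE above confirms. A dry-run sketch of that sequence (scripts/rpc.py is the usual SPDK RPC client, an assumption here since the log goes through the suite's rpc_cmd wrapper instead; echo stands in for actually sending them):

```shell
#!/usr/bin/env bash
# The five RPCs from bdevio.sh@18..@22 in the trace above, in order.
rpc="scripts/rpc.py"   # assumed client path; rpc_cmd wraps this in the suite
rpc_cmds=(
    "nvmf_create_transport -t tcp -o -u 8192"                                    # @18
    "bdev_malloc_create 64 512 -b Malloc0"                                       # @19: 64 MiB, 512 B blocks
    "nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001"  # @20
    "nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0"                   # @21
    "nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420"  # @22
)
for c in "${rpc_cmds[@]}"; do
    echo "+ $rpc $c"   # dry run only
done
```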
00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # config=() 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@372 -- # local subsystem config 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:32:00.029 { 00:32:00.029 "params": { 00:32:00.029 "name": "Nvme$subsystem", 00:32:00.029 "trtype": "$TEST_TRANSPORT", 00:32:00.029 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:00.029 "adrfam": "ipv4", 00:32:00.029 "trsvcid": "$NVMF_PORT", 00:32:00.029 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:00.029 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:00.029 "hdgst": ${hdgst:-false}, 00:32:00.029 "ddgst": ${ddgst:-false} 00:32:00.029 }, 00:32:00.029 "method": "bdev_nvme_attach_controller" 00:32:00.029 } 00:32:00.029 EOF 00:32:00.029 )") 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@394 -- # cat 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@396 -- # jq . 
00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@397 -- # IFS=, 00:32:00.029 10:49:40 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:32:00.029 "params": { 00:32:00.029 "name": "Nvme1", 00:32:00.029 "trtype": "tcp", 00:32:00.029 "traddr": "10.0.0.2", 00:32:00.029 "adrfam": "ipv4", 00:32:00.029 "trsvcid": "4420", 00:32:00.029 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:00.029 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:00.029 "hdgst": false, 00:32:00.029 "ddgst": false 00:32:00.029 }, 00:32:00.029 "method": "bdev_nvme_attach_controller" 00:32:00.029 }' 00:32:00.029 [2024-11-20 10:49:40.625296] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:32:00.029 [2024-11-20 10:49:40.625342] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3452520 ] 00:32:00.029 [2024-11-20 10:49:40.699793] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:32:00.029 [2024-11-20 10:49:40.743913] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:00.029 [2024-11-20 10:49:40.744021] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:00.029 [2024-11-20 10:49:40.744022] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:32:00.286 I/O targets: 00:32:00.286 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:32:00.286 00:32:00.286 00:32:00.286 CUnit - A unit testing framework for C - Version 2.1-3 00:32:00.286 http://cunit.sourceforge.net/ 00:32:00.286 00:32:00.286 00:32:00.287 Suite: bdevio tests on: Nvme1n1 00:32:00.287 Test: blockdev write read block ...passed 00:32:00.287 Test: blockdev write zeroes read block ...passed 00:32:00.287 Test: blockdev write zeroes read no split ...passed 00:32:00.287 Test: blockdev 
write zeroes read split ...passed 00:32:00.544 Test: blockdev write zeroes read split partial ...passed 00:32:00.544 Test: blockdev reset ...[2024-11-20 10:49:41.044774] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 1] resetting controller 00:32:00.544 [2024-11-20 10:49:41.044834] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1792340 (9): Bad file descriptor 00:32:00.544 [2024-11-20 10:49:41.139162] bdev_nvme.c:2282:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [nqn.2016-06.io.spdk:cnode1, 2] Resetting controller successful. 00:32:00.544 passed 00:32:00.544 Test: blockdev write read 8 blocks ...passed 00:32:00.544 Test: blockdev write read size > 128k ...passed 00:32:00.544 Test: blockdev write read invalid size ...passed 00:32:00.544 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:32:00.544 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:32:00.544 Test: blockdev write read max offset ...passed 00:32:00.802 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:32:00.802 Test: blockdev writev readv 8 blocks ...passed 00:32:00.802 Test: blockdev writev readv 30 x 1block ...passed 00:32:00.802 Test: blockdev writev readv block ...passed 00:32:00.802 Test: blockdev writev readv size > 128k ...passed 00:32:00.802 Test: blockdev writev readv size > 128k in two iovs ...passed 00:32:00.802 Test: blockdev comparev and writev ...[2024-11-20 10:49:41.351100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.351130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.351144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 
[2024-11-20 10:49:41.351151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.351435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.351447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.351459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.351466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.351750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.351760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.351771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.351778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.352065] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.352077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.352088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:32:00.802 [2024-11-20 10:49:41.352096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:32:00.802 passed 00:32:00.802 Test: blockdev nvme passthru rw ...passed 00:32:00.802 Test: blockdev nvme passthru vendor specific ...[2024-11-20 10:49:41.435587] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.802 [2024-11-20 10:49:41.435604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.435717] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.802 [2024-11-20 10:49:41.435727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.435833] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.802 [2024-11-20 10:49:41.435843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:32:00.802 [2024-11-20 10:49:41.435956] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:32:00.802 [2024-11-20 10:49:41.435966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:32:00.803 passed 00:32:00.803 Test: blockdev nvme admin passthru ...passed 00:32:00.803 Test: blockdev copy ...passed 00:32:00.803 00:32:00.803 Run Summary: Type Total Ran Passed Failed Inactive 00:32:00.803 suites 1 1 n/a 0 0 00:32:00.803 tests 23 23 23 0 0 00:32:00.803 asserts 152 152 152 0 n/a 00:32:00.803 00:32:00.803 Elapsed time = 1.212 
seconds 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@99 -- # sync 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@102 -- # set +e 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:01.062 rmmod nvme_tcp 00:32:01.062 rmmod nvme_fabrics 00:32:01.062 rmmod nvme_keyring 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@106 -- # set -e 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@107 -- # return 0 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio 
-- nvmf/common.sh@336 -- # '[' -n 3452478 ']' 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@337 -- # killprocess 3452478 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@954 -- # '[' -z 3452478 ']' 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@958 -- # kill -0 3452478 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # uname 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3452478 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@960 -- # process_name=reactor_3 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@964 -- # '[' reactor_3 = sudo ']' 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3452478' 00:32:01.062 killing process with pid 3452478 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@973 -- # kill 3452478 00:32:01.062 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@978 -- # wait 3452478 00:32:01.321 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:01.321 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@342 -- # nvmf_fini 00:32:01.321 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@264 -- # local dev 00:32:01.321 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@267 -- # 
remove_target_ns 00:32:01.321 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:01.322 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:01.322 10:49:41 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@130 -- # return 0 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:03.858 10:49:43 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:03.858 10:49:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # _dev=0 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@41 -- # dev_map=() 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/setup.sh@284 -- # iptr 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-save 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- nvmf/common.sh@542 -- # iptables-restore 00:32:03.858 00:32:03.858 real 0m10.125s 00:32:03.858 user 0m8.738s 00:32:03.858 sys 0m5.292s 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 
00:32:03.858 ************************************ 00:32:03.858 END TEST nvmf_bdevio 00:32:03.858 ************************************ 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@30 -- # [[ tcp == \t\c\p ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@32 -- # [[ phy != phy ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:03.858 ************************************ 00:32:03.858 START TEST nvmf_zcopy 00:32:03.858 ************************************ 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp --interrupt-mode 00:32:03.858 * Looking for test storage... 
00:32:03.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lcov --version 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # IFS=.-: 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@336 -- # read -ra ver1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # IFS=.-: 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@337 -- # read -ra ver2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@338 -- # local 'op=<' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@340 -- # ver1_l=2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@341 -- # ver2_l=1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
scripts/common.sh@344 -- # case "$op" in 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@345 -- # : 1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # decimal 1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@365 -- # ver1[v]=1 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # decimal 2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@353 -- # local d=2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@355 -- # echo 2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@366 -- # ver2[v]=2 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@368 -- # return 0 00:32:03.858 10:49:44 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.858 --rc genhtml_branch_coverage=1 00:32:03.858 --rc genhtml_function_coverage=1 00:32:03.858 --rc genhtml_legend=1 00:32:03.858 --rc geninfo_all_blocks=1 00:32:03.858 --rc geninfo_unexecuted_blocks=1 00:32:03.858 00:32:03.858 ' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.858 --rc genhtml_branch_coverage=1 00:32:03.858 --rc genhtml_function_coverage=1 00:32:03.858 --rc genhtml_legend=1 00:32:03.858 --rc geninfo_all_blocks=1 00:32:03.858 --rc geninfo_unexecuted_blocks=1 00:32:03.858 00:32:03.858 ' 00:32:03.858 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:03.858 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.858 --rc genhtml_branch_coverage=1 00:32:03.859 --rc genhtml_function_coverage=1 00:32:03.859 --rc genhtml_legend=1 00:32:03.859 --rc geninfo_all_blocks=1 00:32:03.859 --rc geninfo_unexecuted_blocks=1 00:32:03.859 00:32:03.859 ' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:03.859 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:03.859 --rc genhtml_branch_coverage=1 00:32:03.859 --rc genhtml_function_coverage=1 00:32:03.859 --rc genhtml_legend=1 00:32:03.859 --rc geninfo_all_blocks=1 00:32:03.859 --rc geninfo_unexecuted_blocks=1 00:32:03.859 00:32:03.859 ' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- 
nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@15 -- # shopt -s extglob 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@50 -- # : 0 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 
00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@260 -- # remove_target_ns 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:03.859 10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@125 -- # xtrace_disable 00:32:03.859 
10:49:44 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # pci_devs=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # net_devs=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # e810=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@136 -- # local -ga e810 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # x722=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@137 -- # local -ga x722 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # mlx=() 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@138 -- # local -ga mlx 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@141 -- # 
e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 
00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:10.433 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:10.433 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@192 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:10.433 Found net devices under 0000:86:00.0: cvl_0_0 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 
00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:10.433 Found net devices under 0000:86:00.1: cvl_0_1 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@262 -- # is_hw=yes 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:10.433 10:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@257 -- # create_target_ns 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:10.433 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@28 -- # local -g _dev 00:32:10.434 10:49:49 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # ips=() 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@67 -- # [[ phy == 
veth ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:10.434 10:49:49 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772161 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy 
-- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:10.434 10.0.0.1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@11 -- # local val=167772162 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee 
/sys/class/net/cvl_0_1/ifalias 00:32:10.434 10.0.0.2 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j 
ACCEPT 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:10.434 10:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:10.434 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ip netns exec 
nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:10.435 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:32:10.435 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.464 ms 00:32:10.435 00:32:10.435 --- 10.0.0.1 ping statistics --- 00:32:10.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.435 rtt min/avg/max/mdev = 0.464/0.464/0.464/0.000 ms 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:10.435 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:32:10.435 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.228 ms 00:32:10.435 00:32:10.435 --- 10.0.0.2 ping statistics --- 00:32:10.435 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:10.435 rtt min/avg/max/mdev = 0.228/0.228/0.228/0.000 ms 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@270 -- # return 0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator0 
00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:10.435 10:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target0 00:32:10.435 10:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:10.435 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:10.436 10:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@107 -- # local dev=target1 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@109 -- # return 1 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@168 -- # dev= 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@169 -- # return 0 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@326 -- # 
timing_enter start_nvmf_tgt 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@328 -- # nvmfpid=3456272 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@329 -- # waitforlisten 3456272 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x2 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@835 -- # '[' -z 3456272 ']' 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:10.436 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 [2024-11-20 10:49:50.409357] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:10.436 [2024-11-20 10:49:50.410313] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:32:10.436 [2024-11-20 10:49:50.410352] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:10.436 [2024-11-20 10:49:50.470861] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.436 [2024-11-20 10:49:50.512684] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:10.436 [2024-11-20 10:49:50.512718] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:10.436 [2024-11-20 10:49:50.512724] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:10.436 [2024-11-20 10:49:50.512730] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:10.436 [2024-11-20 10:49:50.512735] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:10.436 [2024-11-20 10:49:50.513251] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:10.436 [2024-11-20 10:49:50.578283] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:10.436 [2024-11-20 10:49:50.578495] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@868 -- # return 0 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 [2024-11-20 10:49:50.645901] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 
00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@20 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 [2024-11-20 10:49:50.674118] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 malloc0 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:32:10.436 10:49:50 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@28 -- # gen_nvmf_target_json 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=() 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:32:10.436 { 00:32:10.436 "params": { 00:32:10.436 "name": "Nvme$subsystem", 00:32:10.436 "trtype": "$TEST_TRANSPORT", 00:32:10.436 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:10.436 "adrfam": "ipv4", 00:32:10.436 "trsvcid": "$NVMF_PORT", 00:32:10.436 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:10.436 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:10.436 "hdgst": ${hdgst:-false}, 00:32:10.436 "ddgst": ${ddgst:-false} 00:32:10.436 }, 00:32:10.436 "method": "bdev_nvme_attach_controller" 00:32:10.436 } 00:32:10.436 EOF 00:32:10.436 )") 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:32:10.436 10:49:50 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:32:10.436 "params": { 00:32:10.436 "name": "Nvme1", 00:32:10.436 "trtype": "tcp", 00:32:10.436 "traddr": "10.0.0.2", 00:32:10.436 "adrfam": "ipv4", 00:32:10.436 "trsvcid": "4420", 00:32:10.436 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:10.436 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:10.436 "hdgst": false, 00:32:10.436 "ddgst": false 00:32:10.436 }, 00:32:10.437 "method": "bdev_nvme_attach_controller" 00:32:10.437 }' 00:32:10.437 [2024-11-20 10:49:50.765326] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:32:10.437 [2024-11-20 10:49:50.765370] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3456302 ] 00:32:10.437 [2024-11-20 10:49:50.839743] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:10.437 [2024-11-20 10:49:50.883624] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:10.437 Running I/O for 10 seconds... 
00:32:12.750 8536.00 IOPS, 66.69 MiB/s [2024-11-20T09:49:54.417Z] 8578.50 IOPS, 67.02 MiB/s [2024-11-20T09:49:55.354Z] 8589.67 IOPS, 67.11 MiB/s [2024-11-20T09:49:56.290Z] 8595.50 IOPS, 67.15 MiB/s [2024-11-20T09:49:57.226Z] 8610.80 IOPS, 67.27 MiB/s [2024-11-20T09:49:58.162Z] 8610.67 IOPS, 67.27 MiB/s [2024-11-20T09:49:59.098Z] 8610.57 IOPS, 67.27 MiB/s [2024-11-20T09:50:00.494Z] 8610.62 IOPS, 67.27 MiB/s [2024-11-20T09:50:01.430Z] 8603.44 IOPS, 67.21 MiB/s [2024-11-20T09:50:01.430Z] 8603.70 IOPS, 67.22 MiB/s
00:32:20.699 Latency(us)
00:32:20.699 [2024-11-20T09:50:01.430Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:20.699 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192)
00:32:20.699 Verification LBA range: start 0x0 length 0x1000
00:32:20.699 Nvme1n1 : 10.05 8572.15 66.97 0.00 0.00 14831.25 2949.12 44938.97
00:32:20.699 [2024-11-20T09:50:01.430Z] ===================================================================================================================
00:32:20.699 [2024-11-20T09:50:01.430Z] Total : 8572.15 66.97 0.00 0.00 14831.25 2949.12 44938.97
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@34 -- # perfpid=3458078
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@36 -- # xtrace_disable
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@32 -- # gen_nvmf_target_json
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # config=()
00:32:20.699 10:50:01
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@372 -- # local subsystem config 00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:32:20.699 { 00:32:20.699 "params": { 00:32:20.699 "name": "Nvme$subsystem", 00:32:20.699 "trtype": "$TEST_TRANSPORT", 00:32:20.699 "traddr": "$NVMF_FIRST_TARGET_IP", 00:32:20.699 "adrfam": "ipv4", 00:32:20.699 "trsvcid": "$NVMF_PORT", 00:32:20.699 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:32:20.699 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:32:20.699 "hdgst": ${hdgst:-false}, 00:32:20.699 "ddgst": ${ddgst:-false} 00:32:20.699 }, 00:32:20.699 "method": "bdev_nvme_attach_controller" 00:32:20.699 } 00:32:20.699 EOF 00:32:20.699 )") 00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@394 -- # cat 00:32:20.699 [2024-11-20 10:50:01.277583] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.277615] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@396 -- # jq . 
00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@397 -- # IFS=, 00:32:20.699 10:50:01 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:32:20.699 "params": { 00:32:20.699 "name": "Nvme1", 00:32:20.699 "trtype": "tcp", 00:32:20.699 "traddr": "10.0.0.2", 00:32:20.699 "adrfam": "ipv4", 00:32:20.699 "trsvcid": "4420", 00:32:20.699 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:32:20.699 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:32:20.699 "hdgst": false, 00:32:20.699 "ddgst": false 00:32:20.699 }, 00:32:20.699 "method": "bdev_nvme_attach_controller" 00:32:20.699 }' 00:32:20.699 [2024-11-20 10:50:01.289554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.289569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 [2024-11-20 10:50:01.301545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.301556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 [2024-11-20 10:50:01.313546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.313556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 [2024-11-20 10:50:01.316914] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:32:20.699 [2024-11-20 10:50:01.316957] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3458078 ] 00:32:20.699 [2024-11-20 10:50:01.325546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.325557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 [2024-11-20 10:50:01.337546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.699 [2024-11-20 10:50:01.337558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.699 [2024-11-20 10:50:01.349548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.349558] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.361547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.361557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.373544] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.373553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.385546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.385556] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.391863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:20.700 [2024-11-20 10:50:01.397545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
00:32:20.700 [2024-11-20 10:50:01.397557] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.409546] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.409561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.700 [2024-11-20 10:50:01.421547] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.700 [2024-11-20 10:50:01.421563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.433549] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.433563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.433658] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:20.958 [2024-11-20 10:50:01.445553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.445568] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.457554] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.457572] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.469548] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.469561] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.481545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.481559] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.493550] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.493563] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.505545] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.505555] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.517560] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.517579] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.529552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.529569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.541551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.541567] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.553552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.553566] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.565555] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.565573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.577551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.577569] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 Running I/O for 5 seconds... 
00:32:20.958 [2024-11-20 10:50:01.595407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.595427] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.610063] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.610082] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.625064] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.625083] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.639399] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.639417] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.654074] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.654098] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.669683] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.669703] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:20.958 [2024-11-20 10:50:01.683374] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:20.958 [2024-11-20 10:50:01.683393] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.698803] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.698822] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.713396] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.713416] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.726156] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.726174] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.741260] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.741281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.754306] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.754325] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.767012] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.767032] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.777286] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.777305] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.791177] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.791197] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.805996] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.806014] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.818180] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.818198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.831370] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.831389] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.845976] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.845994] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.860967] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.860985] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.875144] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.875163] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.889445] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.889464] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.900561] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.900580] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.915184] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.915217] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.929920] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 
[2024-11-20 10:50:01.929937] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.217 [2024-11-20 10:50:01.942874] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.217 [2024-11-20 10:50:01.942894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:01.953624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:01.953642] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:01.967870] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:01.967894] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:01.982404] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:01.982424] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:01.997094] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:01.997114] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.010597] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.010616] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.025401] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.025421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.039551] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.039571] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.054268] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.054287] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.070000] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.070018] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.085516] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.085539] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.097646] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.097667] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.111532] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.111553] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.125930] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.125948] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.141143] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.141162] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.155475] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.155494] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:21.477 [2024-11-20 10:50:02.169707] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.169726] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.182490] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.182513] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.477 [2024-11-20 10:50:02.198131] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.477 [2024-11-20 10:50:02.198151] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.213393] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.213412] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.226611] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.226630] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.241016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.241035] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.254458] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.254476] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.267485] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.267504] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:21.737 [2024-11-20 10:50:02.282284] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:21.737 [2024-11-20 10:50:02.282301] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the error pair above (subsystem.c:2123 "Requested NSID 1 already in use" / nvmf_rpc.c:1517 "Unable to add namespace") repeats continuously, roughly every 12-16 ms, from 10:50:02.282 through 10:50:04.657 (~170 occurrences elided); fio throughput reports interleaved with the errors are kept below ...]
16828.00 IOPS, 131.47 MiB/s [2024-11-20T09:50:02.727Z]
16865.50 IOPS, 131.76 MiB/s [2024-11-20T09:50:03.765Z]
16862.33 IOPS, 131.74 MiB/s [2024-11-20T09:50:04.803Z]
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.671238] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.671256] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.686059] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.686078] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.701304] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.701323] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.715743] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.715762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.730240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.730258] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.746030] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.746049] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.761924] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.761942] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.073 [2024-11-20 10:50:04.774241] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.774259] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:24.073 [2024-11-20 10:50:04.787373] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.073 [2024-11-20 10:50:04.787392] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.802381] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.802401] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.817407] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.817428] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.830563] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.830583] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.842073] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.842091] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.855411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.855430] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.870043] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.870061] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.882219] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.882237] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.895437] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.895456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.910742] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.910762] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.925210] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.925229] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.937141] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.937160] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.951224] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.951244] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.965928] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.965946] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.980796] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.980815] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:04.995624] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:04.995644] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:05.009984] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:05.010003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:05.026142] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:05.026161] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:05.041173] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:05.041192] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.332 [2024-11-20 10:50:05.054437] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.332 [2024-11-20 10:50:05.054456] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.069650] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.069679] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.082588] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.082607] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.097379] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.097399] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.110991] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.111010] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.121867] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 
[2024-11-20 10:50:05.121886] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.135629] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.135647] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.150402] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.150421] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.166050] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.166069] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.181300] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.181319] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.194656] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.194675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.209331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.209352] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.222818] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.222837] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.237069] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.237090] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.251080] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.251099] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.265725] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.265745] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.278411] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.278432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.293336] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.293356] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.592 [2024-11-20 10:50:05.304821] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.592 [2024-11-20 10:50:05.304840] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.319710] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.319731] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.334408] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.334432] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.350166] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.350186] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:24.852 [2024-11-20 10:50:05.362320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.362339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.374833] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.374852] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.389628] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.389648] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.402293] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.402311] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.417161] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.417181] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.428702] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.428722] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.443492] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.443512] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.457749] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.457769] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.470198] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.470223] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.483178] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.483198] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.493964] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.493983] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.507330] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.507349] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.521788] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.521807] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.535310] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.535330] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.550089] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.550108] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:24.852 [2024-11-20 10:50:05.566051] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:24.852 [2024-11-20 10:50:05.566070] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.581495] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.581514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.594467] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.594491] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 16864.00 IOPS, 131.75 MiB/s [2024-11-20T09:50:05.843Z] [2024-11-20 10:50:05.609606] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.609625] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.622262] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.622281] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.637274] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.637294] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.651553] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.651573] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.666598] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.666618] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.681146] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.681165] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.695806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: 
*ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.695825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.710158] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.710177] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.724993] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.725012] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.739234] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.739253] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.754212] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.754231] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.769579] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.769598] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.783806] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.783825] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.798311] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.798329] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.813076] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 
[2024-11-20 10:50:05.813096] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.112 [2024-11-20 10:50:05.825631] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.112 [2024-11-20 10:50:05.825651] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.839652] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.839675] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.854240] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.854260] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.869627] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.869646] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.882985] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.883003] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.898007] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.898026] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.914320] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.914339] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.930016] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.371 [2024-11-20 10:50:05.930035] 
nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.371 [2024-11-20 10:50:05.942335] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:05.942353] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:05.955338] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:05.955358] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:05.969980] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:05.969998] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:05.984871] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:05.984890] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:05.999591] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:05.999610] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.013891] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.013909] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.027070] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.027089] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.041877] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.041895] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to 
add namespace 00:32:25.372 [2024-11-20 10:50:06.057552] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.057571] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.068331] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.068350] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.082496] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.082514] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.372 [2024-11-20 10:50:06.098088] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.372 [2024-11-20 10:50:06.098106] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.630 [2024-11-20 10:50:06.113292] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.630 [2024-11-20 10:50:06.113312] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.630 [2024-11-20 10:50:06.127258] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.630 [2024-11-20 10:50:06.127277] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.630 [2024-11-20 10:50:06.141914] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.630 [2024-11-20 10:50:06.141932] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.631 [2024-11-20 10:50:06.154827] subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:32:25.631 [2024-11-20 10:50:06.154847] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:32:25.631 [2024-11-20 10:50:06.166141] 
subsystem.c:2123:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:32:25.631 [2024-11-20 10:50:06.166159] nvmf_rpc.c:1517:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... same two-message error pair repeated with varying timestamps, 10:50:06.179216 through 10:50:06.605572 ...]
00:32:25.891 16850.60 IOPS, 131.65 MiB/s [2024-11-20T09:50:06.622Z]
00:32:25.891 Latency(us)
00:32:25.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:32:25.891 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192)
00:32:25.891 Nvme1n1 : 5.01 16851.64 131.65 0.00 0.00 7588.32 1950.48 12607.88
00:32:25.891 ===================================================================================================================
00:32:25.891 Total : 16851.64 131.65 0.00 0.00 7588.32 1950.48 12607.88
[... same two-message error pair repeated with varying timestamps, 10:50:06.617555 through 10:50:06.761559 ...]
00:32:26.150 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 37: kill: (3458078) - No such process 00:32:26.150 10:50:06 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@44 -- # wait 3458078 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@47 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@48 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:26.150 delay0 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:26.150 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:26.151 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:26.151 10:50:06 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 
traddr:10.0.0.2 trsvcid:4420 ns:1' 00:32:26.409 [2024-11-20 10:50:06.907095] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:32:34.528 Initializing NVMe Controllers 00:32:34.528 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:32:34.528 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:32:34.528 Initialization complete. Launching workers. 00:32:34.528 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 290, failed: 12750 00:32:34.528 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 12958, failed to submit 82 00:32:34.528 success 12856, unsuccessful 102, failed 0 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- target/zcopy.sh@55 -- # nvmftestfini 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@99 -- # sync 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@102 -- # set +e 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:34.528 rmmod nvme_tcp 00:32:34.528 rmmod nvme_fabrics 00:32:34.528 rmmod nvme_keyring 00:32:34.528 10:50:13 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:34.528 10:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@106 -- # set -e 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@107 -- # return 0 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@336 -- # '[' -n 3456272 ']' 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@337 -- # killprocess 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@954 -- # '[' -z 3456272 ']' 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@958 -- # kill -0 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # uname 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3456272' 00:32:34.528 killing process with pid 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@973 -- # kill 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@978 -- # wait 3456272 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:34.528 10:50:14 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@342 -- # nvmf_fini 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@264 -- # local dev 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 15> /dev/null' 00:32:34.528 10:50:14 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@130 -- # return 0 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:35.907 10:50:16 
nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # _dev=0 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@41 -- # dev_map=() 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/setup.sh@284 -- # iptr 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-save 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- nvmf/common.sh@542 -- # iptables-restore 00:32:35.907 00:32:35.907 real 0m32.215s 00:32:35.907 user 0m41.556s 00:32:35.907 sys 0m12.827s 
00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:32:35.907 ************************************ 00:32:35.907 END TEST nvmf_zcopy 00:32:35.907 ************************************ 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- nvmf/nvmf_target_core.sh@38 -- # trap - SIGINT SIGTERM EXIT 00:32:35.907 00:32:35.907 real 4m26.883s 00:32:35.907 user 9m4.963s 00:32:35.907 sys 1m47.759s 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:35.907 10:50:16 nvmf_tcp.nvmf_target_core_interrupt_mode -- common/autotest_common.sh@10 -- # set +x 00:32:35.907 ************************************ 00:32:35.907 END TEST nvmf_target_core_interrupt_mode 00:32:35.907 ************************************ 00:32:35.907 10:50:16 nvmf_tcp -- nvmf/nvmf.sh@17 -- # run_test nvmf_interrupt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:35.907 10:50:16 nvmf_tcp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:32:35.907 10:50:16 nvmf_tcp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:35.907 10:50:16 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:35.907 ************************************ 00:32:35.907 START TEST nvmf_interrupt 00:32:35.907 ************************************ 00:32:35.907 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/interrupt.sh --transport=tcp --interrupt-mode 00:32:35.907 * Looking for test storage... 
00:32:35.907 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:32:35.907 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:35.907 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lcov --version 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # IFS=.-: 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@336 -- # read -ra ver1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # IFS=.-: 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@337 -- # read -ra ver2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@338 -- # local 'op=<' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@340 -- # ver1_l=2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@341 -- # ver2_l=1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@344 -- # case "$op" in 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@345 -- # : 1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # decimal 1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@365 -- # ver1[v]=1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # decimal 2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@353 -- # local d=2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@355 -- # echo 2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@366 -- # ver2[v]=2 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@368 -- # return 0 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.908 --rc genhtml_branch_coverage=1 00:32:35.908 --rc genhtml_function_coverage=1 00:32:35.908 --rc genhtml_legend=1 00:32:35.908 --rc geninfo_all_blocks=1 00:32:35.908 --rc geninfo_unexecuted_blocks=1 00:32:35.908 00:32:35.908 ' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.908 --rc genhtml_branch_coverage=1 00:32:35.908 --rc 
genhtml_function_coverage=1 00:32:35.908 --rc genhtml_legend=1 00:32:35.908 --rc geninfo_all_blocks=1 00:32:35.908 --rc geninfo_unexecuted_blocks=1 00:32:35.908 00:32:35.908 ' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.908 --rc genhtml_branch_coverage=1 00:32:35.908 --rc genhtml_function_coverage=1 00:32:35.908 --rc genhtml_legend=1 00:32:35.908 --rc geninfo_all_blocks=1 00:32:35.908 --rc geninfo_unexecuted_blocks=1 00:32:35.908 00:32:35.908 ' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:35.908 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:35.908 --rc genhtml_branch_coverage=1 00:32:35.908 --rc genhtml_function_coverage=1 00:32:35.908 --rc genhtml_legend=1 00:32:35.908 --rc geninfo_all_blocks=1 00:32:35.908 --rc geninfo_unexecuted_blocks=1 00:32:35.908 00:32:35.908 ' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # uname -s 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:35.908 
10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@15 -- # shopt -s extglob 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@5 -- # export PATH 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:35.908 
10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@50 -- # : 0 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@31 -- # '[' 1 -eq 1 ']' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@32 -- # NVMF_APP+=(--interrupt-mode) 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/interrupt/common.sh 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@12 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@14 -- # nvmftestinit 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@296 -- # prepare_net_devs 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@258 -- # local -g is_hw=no 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@260 -- # remove_target_ns 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # 
xtrace_disable_per_cmd _remove_target_ns 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@125 -- # xtrace_disable 00:32:35.908 10:50:16 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # pci_devs=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@131 -- # local -a pci_devs 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # pci_net_devs=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # pci_drivers=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@133 -- # local -A pci_drivers 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # net_devs=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@135 -- # local -ga net_devs 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # e810=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@136 -- # local -ga e810 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # x722=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@137 -- # local -ga x722 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # mlx=() 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@138 -- # local -ga mlx 00:32:42.494 
10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:32:42.494 Found 0000:86:00.0 (0x8086 - 0x159b) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:32:42.494 Found 0000:86:00.1 (0x8086 - 0x159b) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.494 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:32:42.495 Found net devices under 0000:86:00.0: cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@234 -- # [[ up == up ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:32:42.495 Found net devices under 0000:86:00.1: cvl_0_1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@262 -- # is_hw=yes 
00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@257 -- # create_target_ns 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@27 -- # local -gA dev_map 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@28 -- # local -g _dev 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # ips=() 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 
netns nvmf_ns_spdk 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772161 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:32:42.495 10.0.0.1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@11 -- # local val=167772162 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:32:42.495 
10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:32:42.495 10.0.0.2 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:32:42.495 10:50:22 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@38 -- # ping_ips 1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n 
initiator0 ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:42.495 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:32:42.496 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:32:42.496 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.489 ms 00:32:42.496 00:32:42.496 --- 10.0.0.1 ping statistics --- 00:32:42.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.496 rtt min/avg/max/mdev = 0.489/0.489/0.489/0.000 ms 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 
00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:32:42.496 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:32:42.496 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:32:42.496 00:32:42.496 --- 10.0.0.2 ping statistics --- 00:32:42.496 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:32:42.496 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair++ )) 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@270 -- # return 0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- 
nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=initiator1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # 
[[ -n '' ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:32:42.496 10:50:22 
nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # get_net_dev target1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@107 -- # local dev=target1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@109 -- # return 1 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@168 -- # dev= 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@169 -- # return 0 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 
00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@15 -- # nvmfappstart -m 0x3 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@328 -- # nvmfpid=3463521 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@329 -- # waitforlisten 3463521 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --interrupt-mode -m 0x3 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@835 -- # '[' -z 3463521 ']' 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:42.496 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:42.496 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 [2024-11-20 10:50:22.710758] thread.c:2977:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:32:42.497 [2024-11-20 10:50:22.711739] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:32:42.497 [2024-11-20 10:50:22.711776] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:32:42.497 [2024-11-20 10:50:22.790629] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:42.497 [2024-11-20 10:50:22.831661] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:32:42.497 [2024-11-20 10:50:22.831696] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:32:42.497 [2024-11-20 10:50:22.831704] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:32:42.497 [2024-11-20 10:50:22.831710] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:32:42.497 [2024-11-20 10:50:22.831716] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:32:42.497 [2024-11-20 10:50:22.832905] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:42.497 [2024-11-20 10:50:22.832909] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:42.497 [2024-11-20 10:50:22.900448] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:32:42.497 [2024-11-20 10:50:22.901060] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:32:42.497 [2024-11-20 10:50:22.901266] thread.c:2115:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 
00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@868 -- # return 0 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@16 -- # setup_bdev_aio 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # uname -s 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@77 -- # [[ Linux != \F\r\e\e\B\S\D ]] 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@78 -- # dd if=/dev/zero of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile bs=2048 count=5000 00:32:42.497 5000+0 records in 00:32:42.497 5000+0 records out 00:32:42.497 10240000 bytes (10 MB, 9.8 MiB) copied, 0.0167645 s, 611 MB/s 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@79 -- # rpc_cmd bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aiofile AIO0 2048 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.497 10:50:22 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 AIO0 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -q 256 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.497 10:50:23 
nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 [2024-11-20 10:50:23.045795] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 AIO0 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@563 -- # xtrace_disable 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:42.497 [2024-11-20 10:50:23.086171] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3463521 0 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 0 idle 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:42.497 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463521 root 20 0 128.2g 46848 34560 S 6.7 0.0 0:00.26 reactor_0' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463521 root 20 0 128.2g 46848 34560 S 6.7 0.0 0:00.26 reactor_0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.7 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.757 
10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@24 -- # for i in {0..1} 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@25 -- # reactor_is_idle 3463521 1 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 1 idle 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463557 root 20 0 128.2g 46848 34560 S 0.0 0.0 0:00.00 reactor_1' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463557 root 20 0 128.2g 
46848 34560 S 0.0 0.0 0:00.00 reactor_1 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@28 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@35 -- # perf_pid=3463773 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 256 -o 4096 -w randrw -M 30 -t 10 -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3463521 0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3463521 0 busy 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@12 -- # local state=busy 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=30 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:42.757 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463521 root 20 0 128.2g 47616 34560 R 13.3 0.0 0:00.28 reactor_0' 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463521 root 20 0 128.2g 47616 34560 R 13.3 0.0 0:00.28 reactor_0 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=13.3 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=13 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:43.017 10:50:23 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@31 -- # sleep 1 00:32:43.953 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j-- )) 00:32:43.953 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:43.953 10:50:24 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:43.953 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463521 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0' 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463521 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:02.64 reactor_0 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@38 -- # for i in {0..1} 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # BUSY_THRESHOLD=30 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@39 -- # reactor_is_busy 3463521 1 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@49 -- # reactor_is_busy_or_idle 3463521 1 busy 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=busy 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@13 -- # local busy_threshold=30 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ busy != \b\u\s\y ]] 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:44.212 10:50:24 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463557 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.38 reactor_1' 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463557 root 20 0 128.2g 47616 34560 R 99.9 0.0 0:01.38 reactor_1 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=99.9 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=99 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ busy = \b\u\s\y ]] 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # (( cpu_rate < busy_threshold )) 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ busy = \i\d\l\e ]] 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:44.472 10:50:25 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@42 -- # wait 3463773 00:32:54.452 Initializing NVMe Controllers 00:32:54.452 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: 
nqn.2016-06.io.spdk:cnode1 00:32:54.452 Controller IO queue size 256, less than required. 00:32:54.452 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:32:54.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:32:54.452 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:32:54.452 Initialization complete. Launching workers. 00:32:54.452 ======================================================== 00:32:54.452 Latency(us) 00:32:54.452 Device Information : IOPS MiB/s Average min max 00:32:54.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 16342.34 63.84 15673.36 2991.63 56391.34 00:32:54.452 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 16524.03 64.55 15497.30 7591.31 57047.52 00:32:54.452 ======================================================== 00:32:54.452 Total : 32866.37 128.38 15584.84 2991.63 57047.52 00:32:54.452 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3463521 0 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 0 idle 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=0 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle 
!= \i\d\l\e ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463521 root 20 0 128.2g 47616 34560 S 6.2 0.0 0:20.25 reactor_0' 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463521 root 20 0 128.2g 47616 34560 S 6.2 0.0 0:20.25 reactor_0 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=6.2 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=6 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@45 -- # for i in {0..1} 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@46 -- # reactor_is_idle 3463521 1 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 1 idle 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@11 -- # local idx=1 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:54.452 10:50:33 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:54.452 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463557 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1' 00:32:54.452 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463557 root 20 0 128.2g 47616 34560 S 0.0 0.0 0:10.00 reactor_1 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:54.453 10:50:34 
nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@50 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@51 -- # waitforserial SPDKISFASTANDAWESOME 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1202 -- # local i=0 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1203 -- # local nvme_device_counter=1 nvme_devices=0 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1204 -- # [[ -n '' ]] 00:32:54.453 10:50:34 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1209 -- # sleep 2 00:32:55.831 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1210 -- # (( i++ <= 15 )) 00:32:55.831 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # lsblk -l -o NAME,SERIAL 00:32:55.831 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # grep -c SPDKISFASTANDAWESOME 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1211 -- # nvme_devices=1 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # (( nvme_devices == nvme_device_counter )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1212 -- # return 0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3463521 0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 0 idle 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local 
idx=0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463521 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0' 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463521 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:20.51 reactor_0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- 
interrupt/common.sh@35 -- # return 0 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@52 -- # for i in {0..1} 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@53 -- # reactor_is_idle 3463521 1 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@53 -- # reactor_is_busy_or_idle 3463521 1 idle 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@10 -- # local pid=3463521 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@11 -- # local idx=1 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@12 -- # local state=idle 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@13 -- # local busy_threshold=65 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@14 -- # local idle_threshold=30 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \b\u\s\y ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@16 -- # [[ idle != \i\d\l\e ]] 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@20 -- # hash top 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j = 10 )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@25 -- # (( j != 0 )) 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top -bHn 1 -p 3463521 -w 256 00:32:56.090 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # grep reactor_1 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@26 -- # top_reactor='3463557 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.11 reactor_1' 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # echo 3463557 root 20 0 128.2g 73728 34560 S 0.0 0.0 0:10.11 reactor_1 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # sed -e 's/^\s*//g' 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # awk '{print $9}' 00:32:56.350 
10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@27 -- # cpu_rate=0.0 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@28 -- # cpu_rate=0 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@30 -- # [[ idle = \b\u\s\y ]] 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # [[ idle = \i\d\l\e ]] 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@32 -- # (( cpu_rate > idle_threshold )) 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- interrupt/common.sh@35 -- # return 0 00:32:56.350 10:50:36 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@55 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:32:56.609 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@56 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1223 -- # local i=0 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # lsblk -o NAME,SERIAL 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1224 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # lsblk -l -o NAME,SERIAL 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1231 -- # grep -q -w SPDKISFASTANDAWESOME 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1235 -- # return 0 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- target/interrupt.sh@59 -- # nvmftestfini 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@335 -- # nvmfcleanup 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@99 -- # sync 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:32:56.609 10:50:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@102 -- # set +e 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@103 -- # for i in {1..20} 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:32:56.609 rmmod nvme_tcp 00:32:56.609 rmmod nvme_fabrics 00:32:56.609 rmmod nvme_keyring 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@106 -- # set -e 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@107 -- # return 0 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@336 -- # '[' -n 3463521 ']' 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@337 -- # killprocess 3463521 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@954 -- # '[' -z 3463521 ']' 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@958 -- # kill -0 3463521 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # uname 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3463521 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3463521' 00:32:56.609 killing process with pid 3463521 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@973 -- # kill 3463521 00:32:56.609 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@978 -- # wait 3463521 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:32:56.916 10:50:37 
nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@342 -- # nvmf_fini 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@264 -- # local dev 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@267 -- # remove_target_ns 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 14> /dev/null' 00:32:56.916 10:50:37 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@22 -- # _remove_target_ns 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@268 -- # delete_main_bridge 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@130 -- # return 0 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@279 -- # flush_ip 
cvl_0_1 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # _dev=0 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@41 -- # dev_map=() 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/setup.sh@284 -- # iptr 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-save 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:32:58.889 10:50:39 nvmf_tcp.nvmf_interrupt -- nvmf/common.sh@542 -- # iptables-restore 00:32:58.889 00:32:58.889 real 0m23.179s 00:32:58.890 user 0m39.955s 00:32:58.890 sys 0m8.486s 00:32:58.890 10:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:58.890 10:50:39 nvmf_tcp.nvmf_interrupt -- common/autotest_common.sh@10 -- # set +x 00:32:58.890 ************************************ 00:32:58.890 END TEST nvmf_interrupt 00:32:58.890 ************************************ 00:32:59.148 00:32:59.148 real 27m21.580s 00:32:59.148 user 56m47.948s 00:32:59.148 sys 9m19.912s 00:32:59.148 10:50:39 nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:32:59.148 10:50:39 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.148 ************************************ 00:32:59.148 END TEST nvmf_tcp 00:32:59.148 ************************************ 00:32:59.148 10:50:39 -- spdk/autotest.sh@285 -- # [[ 0 -eq 0 ]] 00:32:59.148 10:50:39 -- spdk/autotest.sh@286 -- # run_test spdkcli_nvmf_tcp 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.148 10:50:39 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:32:59.148 10:50:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:32:59.148 10:50:39 -- common/autotest_common.sh@10 -- # set +x 00:32:59.148 ************************************ 00:32:59.148 START TEST spdkcli_nvmf_tcp 00:32:59.148 ************************************ 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:32:59.148 * Looking for test storage... 00:32:59.148 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lcov --version 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:32:59.148 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- 
scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@344 -- # case "$op" in 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@345 -- # : 1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # decimal 1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # decimal 2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@353 -- # local d=2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@355 -- # echo 2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@368 -- # return 0 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:32:59.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.149 --rc genhtml_branch_coverage=1 00:32:59.149 --rc genhtml_function_coverage=1 00:32:59.149 --rc genhtml_legend=1 00:32:59.149 --rc geninfo_all_blocks=1 00:32:59.149 --rc geninfo_unexecuted_blocks=1 
00:32:59.149 00:32:59.149 ' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:32:59.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.149 --rc genhtml_branch_coverage=1 00:32:59.149 --rc genhtml_function_coverage=1 00:32:59.149 --rc genhtml_legend=1 00:32:59.149 --rc geninfo_all_blocks=1 00:32:59.149 --rc geninfo_unexecuted_blocks=1 00:32:59.149 00:32:59.149 ' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:32:59.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.149 --rc genhtml_branch_coverage=1 00:32:59.149 --rc genhtml_function_coverage=1 00:32:59.149 --rc genhtml_legend=1 00:32:59.149 --rc geninfo_all_blocks=1 00:32:59.149 --rc geninfo_unexecuted_blocks=1 00:32:59.149 00:32:59.149 ' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:32:59.149 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:32:59.149 --rc genhtml_branch_coverage=1 00:32:59.149 --rc genhtml_function_coverage=1 00:32:59.149 --rc genhtml_legend=1 00:32:59.149 --rc geninfo_all_blocks=1 00:32:59.149 --rc geninfo_unexecuted_blocks=1 00:32:59.149 00:32:59.149 ' 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:32:59.149 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@15 -- # shopt -s extglob 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 
00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@50 -- # : 0 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:32:59.408 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- nvmf/common.sh@54 -- # have_pci_nics=0 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=3466476 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@34 -- # waitforlisten 3466476 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@835 -- # '[' -z 3466476 ']' 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:32:59.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:32:59.408 10:50:39 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.408 [2024-11-20 10:50:39.958617] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:32:59.408 [2024-11-20 10:50:39.958666] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3466476 ] 00:32:59.408 [2024-11-20 10:50:40.033800] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:32:59.409 [2024-11-20 10:50:40.083502] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:32:59.409 [2024-11-20 10:50:40.083504] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@868 -- # return 0 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:32:59.668 10:50:40 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:32:59.668 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:32:59.668 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:32:59.668 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:32:59.668 '\''/bdevs/malloc create 32 
512 Malloc5'\'' '\''Malloc5'\'' True 00:32:59.668 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:32:59.668 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:32:59.668 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.668 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.668 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' 
'\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:32:59.668 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:32:59.668 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:32:59.668 ' 00:33:02.201 [2024-11-20 10:50:42.909739] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:03.577 [2024-11-20 10:50:44.242161] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:33:06.109 [2024-11-20 10:50:46.729757] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:33:08.642 [2024-11-20 10:50:48.912493] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:33:10.021 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:33:10.021 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:33:10.021 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:33:10.021 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:33:10.021 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:33:10.021 Executing command: 
['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:33:10.021 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:33:10.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.021 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:33:10.021 Executing command: 
['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:33:10.021 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:33:10.021 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:33:10.021 10:50:50 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- 
spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:10.589 10:50:51 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:33:10.589 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:33:10.589 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:10.589 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:33:10.589 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:33:10.589 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:33:10.589 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:33:10.589 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:33:10.589 '\''/bdevs/malloc delete 
Malloc6'\'' '\''Malloc6'\'' 00:33:10.589 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:33:10.589 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:33:10.589 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:33:10.589 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:33:10.589 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:33:10.589 ' 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:33:17.155 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:33:17.155 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:33:17.155 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc4', 'Malloc4', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:33:17.155 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit 
spdkcli_clear_nvmf_config 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 3466476 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3466476 ']' 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3466476 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # uname 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3466476 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3466476' 00:33:17.155 killing process with pid 3466476 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@973 -- # kill 3466476 00:33:17.155 10:50:56 spdkcli_nvmf_tcp -- common/autotest_common.sh@978 -- # wait 3466476 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 3466476 ']' 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 3466476 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # '[' -z 3466476 ']' 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # kill -0 3466476 00:33:17.155 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3466476) - No such process 00:33:17.155 10:50:57 
spdkcli_nvmf_tcp -- common/autotest_common.sh@981 -- # echo 'Process with pid 3466476 is not found' 00:33:17.155 Process with pid 3466476 is not found 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:33:17.155 00:33:17.155 real 0m17.353s 00:33:17.155 user 0m38.294s 00:33:17.155 sys 0m0.771s 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:17.155 10:50:57 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 ************************************ 00:33:17.155 END TEST spdkcli_nvmf_tcp 00:33:17.155 ************************************ 00:33:17.155 10:50:57 -- spdk/autotest.sh@287 -- # run_test nvmf_identify_passthru /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:17.155 10:50:57 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:33:17.155 10:50:57 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:17.155 10:50:57 -- common/autotest_common.sh@10 -- # set +x 00:33:17.155 ************************************ 00:33:17.155 START TEST nvmf_identify_passthru 00:33:17.155 ************************************ 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:33:17.155 * Looking for test storage... 
00:33:17.155 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lcov --version 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # IFS=.-: 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@336 -- # read -ra ver1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # IFS=.-: 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@337 -- # read -ra ver2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@338 -- # local 'op=<' 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@340 -- # ver1_l=2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@341 -- # ver2_l=1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@344 -- # case "$op" in 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@345 -- # : 1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # decimal 1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@365 -- # ver1[v]=1 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # decimal 2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@353 -- # local d=2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@355 -- # echo 2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@366 -- # ver2[v]=2 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:17.155 10:50:57 nvmf_identify_passthru -- scripts/common.sh@368 -- # return 0 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:17.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.155 --rc genhtml_branch_coverage=1 00:33:17.155 --rc genhtml_function_coverage=1 00:33:17.155 --rc genhtml_legend=1 00:33:17.155 --rc geninfo_all_blocks=1 00:33:17.155 --rc geninfo_unexecuted_blocks=1 00:33:17.155 00:33:17.155 ' 00:33:17.155 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:17.155 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.155 --rc genhtml_branch_coverage=1 00:33:17.155 --rc genhtml_function_coverage=1 
00:33:17.156 --rc genhtml_legend=1 00:33:17.156 --rc geninfo_all_blocks=1 00:33:17.156 --rc geninfo_unexecuted_blocks=1 00:33:17.156 00:33:17.156 ' 00:33:17.156 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:33:17.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.156 --rc genhtml_branch_coverage=1 00:33:17.156 --rc genhtml_function_coverage=1 00:33:17.156 --rc genhtml_legend=1 00:33:17.156 --rc geninfo_all_blocks=1 00:33:17.156 --rc geninfo_unexecuted_blocks=1 00:33:17.156 00:33:17.156 ' 00:33:17.156 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:17.156 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:17.156 --rc genhtml_branch_coverage=1 00:33:17.156 --rc genhtml_function_coverage=1 00:33:17.156 --rc genhtml_legend=1 00:33:17.156 --rc geninfo_all_blocks=1 00:33:17.156 --rc geninfo_unexecuted_blocks=1 00:33:17.156 00:33:17.156 ' 00:33:17.156 10:50:57 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:17.156 10:50:57 nvmf_identify_passthru -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@50 -- # : 0 00:33:17.156 10:50:57 nvmf_identify_passthru -- 
nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:17.156 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:17.156 10:50:57 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@15 -- # shopt -s extglob 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:33:17.156 10:50:57 nvmf_identify_passthru -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:17.156 10:50:57 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@260 -- # remove_target_ns 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:17.156 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:33:17.156 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:17.156 10:50:57 nvmf_identify_passthru -- nvmf/common.sh@125 -- # xtrace_disable 00:33:17.156 10:50:57 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@131 -- # pci_devs=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- 
nvmf/common.sh@131 -- # local -a pci_devs 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@135 -- # net_devs=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@136 -- # e810=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@136 -- # local -ga e810 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@137 -- # x722=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@137 -- # local -ga x722 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@138 -- # mlx=() 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@138 -- # local -ga mlx 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:22.424 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 
00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:22.425 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:22.425 Found 
0000:86:00.1 (0x8086 - 0x159b) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:22.425 Found net devices under 0000:86:00.0: cvl_0_0 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:22.425 
10:51:02 nvmf_identify_passthru -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:22.425 Found net devices under 0000:86:00.1: cvl_0_1 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@262 -- # is_hw=yes 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@257 -- # create_target_ns 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@148 -- # set_up lo 
NVMF_TARGET_NS_CMD 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@28 -- # local -g _dev 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # ips=() 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 
00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:22.425 10:51:02 nvmf_identify_passthru -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772161 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee 
/sys/class/net/cvl_0_0/ifalias' 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:22.425 10.0.0.1 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@11 -- # local val=167772162 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:22.425 10.0.0.2 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n '' ]] 
00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:22.425 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:22.426 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:22.685 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:22.685 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:22.685 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:22.686 10:51:03 nvmf_identify_passthru -- 
nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@90 -- 
# [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:22.686 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:33:22.686 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.234 ms 00:33:22.686 00:33:22.686 --- 10.0.0.1 ping statistics --- 00:33:22.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.686 rtt min/avg/max/mdev = 0.234/0.234/0.234/0.000 ms 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat 
/sys/class/net/cvl_0_1/ifalias' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:22.686 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:22.686 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.129 ms 00:33:22.686 00:33:22.686 --- 10.0.0.2 ping statistics --- 00:33:22.686 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:22.686 rtt min/avg/max/mdev = 0.129/0.129/0.129/0.000 ms 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@270 -- # return 0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@298 -- # '[' '' == iso ']' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:22.686 10:51:03 
nvmf_identify_passthru -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:22.686 10:51:03 
nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target0 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # eval 'ip 
netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:22.686 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@107 -- # local dev=target1 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@109 -- # return 1 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@168 -- # dev= 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@169 -- # return 0 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:22.687 10:51:03 
nvmf_identify_passthru -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:22.687 10:51:03 nvmf_identify_passthru -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:22.687 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:22.687 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # bdfs=() 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1509 -- # local bdfs 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # bdfs=() 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1498 -- # local bdfs 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:33:22.687 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:33:22.946 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1500 -- # (( 1 == 0 )) 00:33:22.946 10:51:03 
nvmf_identify_passthru -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:5e:00.0 00:33:22.946 10:51:03 nvmf_identify_passthru -- common/autotest_common.sh@1512 -- # echo 0000:5e:00.0 00:33:22.946 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:33:22.946 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:33:22.946 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:22.946 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:33:22.946 10:51:03 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:33:28.218 10:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=PHLN951000C61P6AGN 00:33:28.218 10:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:33:28.218 10:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:33:28.218 10:51:08 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:33:32.409 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:32.410 10:51:12 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=3474485 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:33:32.410 10:51:12 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 3474485 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@835 -- # '[' -z 3474485 ']' 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:32.410 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:32.410 10:51:12 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.410 [2024-11-20 10:51:12.869126] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:33:32.410 [2024-11-20 10:51:12.869170] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:32.411 [2024-11-20 10:51:12.944505] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:33:32.411 [2024-11-20 10:51:12.987058] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:32.411 [2024-11-20 10:51:12.987096] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:32.411 [2024-11-20 10:51:12.987103] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:32.411 [2024-11-20 10:51:12.987109] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:32.411 [2024-11-20 10:51:12.987114] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:32.411 [2024-11-20 10:51:12.988674] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:33:32.411 [2024-11-20 10:51:12.988708] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:33:32.411 [2024-11-20 10:51:12.988816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:32.411 [2024-11-20 10:51:12.988816] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:33:32.411 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:32.411 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@868 -- # return 0 00:33:32.411 10:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:33:32.411 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.411 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.411 INFO: Log level set to 20 00:33:32.411 INFO: Requests: 00:33:32.411 { 00:33:32.411 "jsonrpc": "2.0", 00:33:32.411 "method": "nvmf_set_config", 00:33:32.411 "id": 1, 00:33:32.411 "params": { 00:33:32.411 "admin_cmd_passthru": { 00:33:32.411 "identify_ctrlr": true 00:33:32.411 } 00:33:32.411 } 00:33:32.411 } 00:33:32.411 00:33:32.411 INFO: response: 00:33:32.411 { 00:33:32.411 "jsonrpc": "2.0", 00:33:32.411 "id": 1, 00:33:32.411 "result": true 00:33:32.411 } 00:33:32.411 00:33:32.412 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.412 10:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:33:32.412 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.412 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.412 INFO: Setting log level to 20 00:33:32.412 INFO: Setting log level to 20 00:33:32.412 INFO: Log level set to 20 00:33:32.412 INFO: Log level set to 20 00:33:32.412 
INFO: Requests: 00:33:32.412 { 00:33:32.412 "jsonrpc": "2.0", 00:33:32.412 "method": "framework_start_init", 00:33:32.412 "id": 1 00:33:32.412 } 00:33:32.412 00:33:32.412 INFO: Requests: 00:33:32.412 { 00:33:32.412 "jsonrpc": "2.0", 00:33:32.412 "method": "framework_start_init", 00:33:32.412 "id": 1 00:33:32.412 } 00:33:32.412 00:33:32.412 [2024-11-20 10:51:13.096226] nvmf_tgt.c: 462:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:33:32.412 INFO: response: 00:33:32.412 { 00:33:32.412 "jsonrpc": "2.0", 00:33:32.412 "id": 1, 00:33:32.412 "result": true 00:33:32.412 } 00:33:32.412 00:33:32.412 INFO: response: 00:33:32.412 { 00:33:32.412 "jsonrpc": "2.0", 00:33:32.412 "id": 1, 00:33:32.412 "result": true 00:33:32.412 } 00:33:32.412 00:33:32.412 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.413 10:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:33:32.413 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.413 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.413 INFO: Setting log level to 40 00:33:32.413 INFO: Setting log level to 40 00:33:32.413 INFO: Setting log level to 40 00:33:32.413 [2024-11-20 10:51:13.109539] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:32.413 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:32.413 10:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:33:32.413 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:32.413 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:32.680 10:51:13 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:33:32.680 10:51:13 
nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:32.680 10:51:13 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 Nvme0n1 00:33:35.981 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:15 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:33:35.981 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.981 10:51:15 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 [2024-11-20 10:51:16.027739] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.981 10:51:16 
nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 [ 00:33:35.981 { 00:33:35.981 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:33:35.981 "subtype": "Discovery", 00:33:35.981 "listen_addresses": [], 00:33:35.981 "allow_any_host": true, 00:33:35.981 "hosts": [] 00:33:35.981 }, 00:33:35.981 { 00:33:35.981 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:33:35.981 "subtype": "NVMe", 00:33:35.981 "listen_addresses": [ 00:33:35.981 { 00:33:35.981 "trtype": "TCP", 00:33:35.981 "adrfam": "IPv4", 00:33:35.981 "traddr": "10.0.0.2", 00:33:35.981 "trsvcid": "4420" 00:33:35.981 } 00:33:35.981 ], 00:33:35.981 "allow_any_host": true, 00:33:35.981 "hosts": [], 00:33:35.981 "serial_number": "SPDK00000000000001", 00:33:35.981 "model_number": "SPDK bdev Controller", 00:33:35.981 "max_namespaces": 1, 00:33:35.981 "min_cntlid": 1, 00:33:35.981 "max_cntlid": 65519, 00:33:35.981 "namespaces": [ 00:33:35.981 { 00:33:35.981 "nsid": 1, 00:33:35.981 "bdev_name": "Nvme0n1", 00:33:35.981 "name": "Nvme0n1", 00:33:35.981 "nguid": "6C918AD38B824EF78975D6C6FB7C7764", 00:33:35.981 "uuid": "6c918ad3-8b82-4ef7-8975-d6c6fb7c7764" 00:33:35.981 } 00:33:35.981 ] 00:33:35.981 } 00:33:35.981 ] 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=PHLN951000C61P6AGN 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' PHLN951000C61P6AGN '!=' PHLN951000C61P6AGN ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:33:35.981 10:51:16 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@335 -- # nvmfcleanup 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@99 -- # sync 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@102 -- # set +e 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@103 -- # for i in {1..20} 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:33:35.981 rmmod nvme_tcp 00:33:35.981 rmmod nvme_fabrics 00:33:35.981 rmmod nvme_keyring 00:33:35.981 10:51:16 
nvmf_identify_passthru -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@106 -- # set -e 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@107 -- # return 0 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@336 -- # '[' -n 3474485 ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- nvmf/common.sh@337 -- # killprocess 3474485 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # '[' -z 3474485 ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # kill -0 3474485 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # uname 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3474485 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3474485' 00:33:35.981 killing process with pid 3474485 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@973 -- # kill 3474485 00:33:35.981 10:51:16 nvmf_identify_passthru -- common/autotest_common.sh@978 -- # wait 3474485 00:33:37.881 10:51:18 nvmf_identify_passthru -- nvmf/common.sh@339 -- # '[' '' == iso ']' 00:33:37.881 10:51:18 nvmf_identify_passthru -- nvmf/common.sh@342 -- # nvmf_fini 00:33:37.881 10:51:18 nvmf_identify_passthru -- nvmf/setup.sh@264 -- # local dev 00:33:37.881 10:51:18 nvmf_identify_passthru -- nvmf/setup.sh@267 -- # remove_target_ns 00:33:37.881 10:51:18 nvmf_identify_passthru -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:37.881 10:51:18 
nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:33:37.881 10:51:18 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@268 -- # delete_main_bridge 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@130 -- # return 0 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@224 -- # ip addr flush dev 
cvl_0_1 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # _dev=0 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@41 -- # dev_map=() 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/setup.sh@284 -- # iptr 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-save 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:33:40.415 10:51:20 nvmf_identify_passthru -- nvmf/common.sh@542 -- # iptables-restore 00:33:40.415 00:33:40.415 real 0m23.493s 00:33:40.415 user 0m29.834s 00:33:40.415 sys 0m6.326s 00:33:40.415 10:51:20 nvmf_identify_passthru -- common/autotest_common.sh@1130 -- # xtrace_disable 00:33:40.415 10:51:20 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:33:40.415 ************************************ 00:33:40.415 END TEST nvmf_identify_passthru 00:33:40.415 ************************************ 00:33:40.415 10:51:20 -- spdk/autotest.sh@289 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.415 10:51:20 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:40.415 10:51:20 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:40.415 10:51:20 -- common/autotest_common.sh@10 -- # set +x 00:33:40.415 ************************************ 00:33:40.415 START TEST nvmf_dif 00:33:40.415 ************************************ 00:33:40.415 10:51:20 nvmf_dif -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:33:40.415 * Looking for test storage... 
00:33:40.415 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:33:40.415 10:51:20 nvmf_dif -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:33:40.415 10:51:20 nvmf_dif -- common/autotest_common.sh@1693 -- # lcov --version 00:33:40.415 10:51:20 nvmf_dif -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:33:40.415 10:51:20 nvmf_dif -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@333 -- # local ver1 ver1_l 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@334 -- # local ver2 ver2_l 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@336 -- # IFS=.-: 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@336 -- # read -ra ver1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@337 -- # IFS=.-: 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@337 -- # read -ra ver2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@338 -- # local 'op=<' 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@340 -- # ver1_l=2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@341 -- # ver2_l=1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@344 -- # case "$op" in 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@345 -- # : 1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@364 -- # (( v = 0 )) 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@365 -- # decimal 1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@353 -- # local d=1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@355 -- # echo 1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@365 -- # ver1[v]=1 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@366 -- # decimal 2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@353 -- # local d=2 00:33:40.415 10:51:20 nvmf_dif -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@355 -- # echo 2 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@366 -- # ver2[v]=2 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@368 -- # return 0 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:33:40.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.416 --rc genhtml_branch_coverage=1 00:33:40.416 --rc genhtml_function_coverage=1 00:33:40.416 --rc genhtml_legend=1 00:33:40.416 --rc geninfo_all_blocks=1 00:33:40.416 --rc geninfo_unexecuted_blocks=1 00:33:40.416 00:33:40.416 ' 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:33:40.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.416 --rc genhtml_branch_coverage=1 00:33:40.416 --rc genhtml_function_coverage=1 00:33:40.416 --rc genhtml_legend=1 00:33:40.416 --rc geninfo_all_blocks=1 00:33:40.416 --rc geninfo_unexecuted_blocks=1 00:33:40.416 00:33:40.416 ' 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@1707 -- # export 
'LCOV=lcov 00:33:40.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.416 --rc genhtml_branch_coverage=1 00:33:40.416 --rc genhtml_function_coverage=1 00:33:40.416 --rc genhtml_legend=1 00:33:40.416 --rc geninfo_all_blocks=1 00:33:40.416 --rc geninfo_unexecuted_blocks=1 00:33:40.416 00:33:40.416 ' 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:33:40.416 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:33:40.416 --rc genhtml_branch_coverage=1 00:33:40.416 --rc genhtml_function_coverage=1 00:33:40.416 --rc genhtml_legend=1 00:33:40.416 --rc geninfo_all_blocks=1 00:33:40.416 --rc geninfo_unexecuted_blocks=1 00:33:40.416 00:33:40.416 ' 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 
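The `cmp_versions 1.15 '<' 2` trace above (scripts/common.sh) splits each version string on the separators `.-:` with `read -ra`, then walks the fields numerically, padding the shorter version with zeros. A minimal standalone sketch of that comparison (the function name `ver_lt` is illustrative, not the script's own):

```shell
# Illustrative re-creation of the field-by-field version check traced above.
# Splits on the same separators (.-:) and returns 0 when v1 < v2.
ver_lt() {
  local IFS=.-:
  local -a ver1 ver2
  read -ra ver1 <<< "$1"
  read -ra ver2 <<< "$2"
  local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
  for (( v = 0; v < max; v++ )); do
    (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
    (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
  done
  return 1   # equal versions are not "less than"
}
ver_lt 1.15 2 && echo "lcov 1.15 predates 2"
```

Here `lcov --version` reported 1.15, so the check passes and the legacy `--rc lcov_branch_coverage=...` option spelling is selected for LCOV_OPTS.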
00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@15 -- # shopt -s extglob 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:33:40.416 10:51:20 nvmf_dif -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:33:40.416 10:51:20 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.416 10:51:20 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.416 10:51:20 nvmf_dif -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.416 10:51:20 nvmf_dif -- paths/export.sh@5 -- # export PATH 00:33:40.416 10:51:20 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:33:40.416 10:51:20 nvmf_dif -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:33:40.416 10:51:20 nvmf_dif -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:40.416 10:51:20 nvmf_dif -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@50 -- # : 0 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:33:40.416 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression 
expected 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@54 -- # have_pci_nics=0 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:33:40.416 10:51:20 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@296 -- # prepare_net_devs 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@258 -- # local -g is_hw=no 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@260 -- # remove_target_ns 00:33:40.416 10:51:20 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:33:40.416 10:51:20 nvmf_dif -- nvmf/common.sh@125 -- # xtrace_disable 00:33:40.416 10:51:20 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@131 -- # pci_devs=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@131 -- # local -a pci_devs 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@132 -- # pci_net_devs=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:33:46.987 
10:51:26 nvmf_dif -- nvmf/common.sh@133 -- # pci_drivers=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@133 -- # local -A pci_drivers 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@135 -- # net_devs=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@135 -- # local -ga net_devs 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@136 -- # e810=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@136 -- # local -ga e810 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@137 -- # x722=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@137 -- # local -ga x722 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@138 -- # mlx=() 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@138 -- # local -ga mlx 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@150 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@162 -- # 
pci_devs+=("${e810[@]}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:33:46.987 Found 0000:86:00.0 (0x8086 - 0x159b) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:33:46.987 Found 0000:86:00.1 (0x8086 - 0x159b) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@226 -- # for pci in 
"${pci_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:33:46.987 Found net devices under 0000:86:00.0: cvl_0_0 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@234 -- # [[ up == up ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:33:46.987 Found net devices under 0000:86:00.1: cvl_0_1 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@262 -- # is_hw=yes 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:33:46.987 10:51:26 nvmf_dif -- 
nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@257 -- # create_target_ns 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@27 -- # local -gA dev_map 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@28 -- # local -g _dev 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@44 -- # ips=() 00:33:46.987 
10:51:26 nvmf_dif -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@51 -- # [[ tcp == tcp ]] 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:33:46.987 10:51:26 nvmf_dif -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772161 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:33:46.988 10:51:26 nvmf_dif -- 
nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:33:46.988 10.0.0.1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@11 -- # local val=167772162 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:33:46.988 10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:33:46.988 10:51:26 nvmf_dif -- 
nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:33:46.988 10:51:26 nvmf_dif -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@38 -- # ping_ips 1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:33:46.988 10:51:26 nvmf_dif -- 
nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:33:46.988 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
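The `val_to_ip` calls in the trace turn the pooled integer 167772161 (0x0A000001) into the dotted quad 10.0.0.1 via `printf '%u.%u.%u.%u'`. A self-contained sketch of that unpacking, consistent with the two conversions the trace shows:

```shell
# Unpack a 32-bit value into dotted-quad notation, high octet first,
# matching the val_to_ip output seen in the trace (167772161 -> 10.0.0.1).
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}
val_to_ip 167772161   # 10.0.0.1
val_to_ip 167772162   # 10.0.0.2
```

This is why the ip_pool counter advances by 2 per interface pair: each pair consumes one initiator address and one target address from the same 32-bit pool.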
00:33:46.988 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.452 ms 00:33:46.988 00:33:46.988 --- 10.0.0.1 ping statistics --- 00:33:46.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.988 rtt min/avg/max/mdev = 0.452/0.452/0.452/0.000 ms 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:33:46.988 10:51:26 
nvmf_dif -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:33:46.988 10:51:26 nvmf_dif -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:33:46.988 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:33:46.988 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:33:46.988 00:33:46.988 --- 10.0.0.2 ping statistics --- 00:33:46.988 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:33:46.988 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:33:46.989 10:51:26 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair++ )) 00:33:46.989 10:51:26 nvmf_dif -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:33:46.989 10:51:26 nvmf_dif -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:33:46.989 10:51:26 nvmf_dif -- nvmf/common.sh@270 -- # return 0 00:33:46.989 10:51:26 nvmf_dif -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:33:46.989 10:51:26 nvmf_dif -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:33:48.894 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:33:48.894 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.3 (8086 2021): Already using the vfio-pci 
driver 00:33:48.894 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:33:48.894 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 
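The repeated `get_net_dev` lookups above resolve logical endpoint names (`initiator0`, `target0`) through the `dev_map` populated earlier at setup.sh@85, and return 1 for unpopulated slots such as `initiator1`, which is how `NVMF_SECOND_INITIATOR_IP` ends up empty. A reduced sketch of that lookup (the map contents below are this run's values):

```shell
# Logical-name to device resolution, mirroring nvmf/setup.sh's dev_map usage:
# populated keys echo their physical device, unpopulated keys return non-zero.
declare -A dev_map=([initiator0]=cvl_0_0 [target0]=cvl_0_1)
get_net_dev() {
  local dev=$1
  [[ -n ${dev_map[$dev]} ]] || return 1
  echo "${dev_map[$dev]}"
}
get_net_dev initiator0                                  # cvl_0_0
get_net_dev initiator1 || echo "initiator1: no device mapped"
```

The callers then read the resolved device's `/sys/class/net/<dev>/ifalias` (inside the target netns for target devices) to recover the IP that `set_ip` wrote there.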
00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=initiator1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- 
# ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # get_net_dev target1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@107 -- # local dev=target1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@109 -- # return 1 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@168 -- # dev= 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@169 -- # return 0 00:33:49.155 10:51:29 nvmf_dif -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:33:49.155 
10:51:29 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:33:49.155 10:51:29 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@726 -- # xtrace_disable 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@328 -- # nvmfpid=3480157 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:33:49.155 10:51:29 nvmf_dif -- nvmf/common.sh@329 -- # waitforlisten 3480157 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@835 -- # '[' -z 3480157 ']' 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@840 -- # local max_retries=100 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:33:49.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@844 -- # xtrace_disable 00:33:49.155 10:51:29 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.414 [2024-11-20 10:51:29.883542] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:33:49.414 [2024-11-20 10:51:29.883594] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:33:49.414 [2024-11-20 10:51:29.962835] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:49.414 [2024-11-20 10:51:30.003419] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:33:49.414 [2024-11-20 10:51:30.003454] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:33:49.414 [2024-11-20 10:51:30.003461] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:33:49.414 [2024-11-20 10:51:30.003467] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:33:49.414 [2024-11-20 10:51:30.003473] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:33:49.414 [2024-11-20 10:51:30.003888] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@868 -- # return 0 00:33:49.414 10:51:30 nvmf_dif -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@732 -- # xtrace_disable 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.414 10:51:30 nvmf_dif -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:33:49.414 10:51:30 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:33:49.414 10:51:30 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.414 [2024-11-20 10:51:30.136017] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.414 10:51:30 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:33:49.414 10:51:30 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:33:49.673 10:51:30 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:33:49.673 10:51:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:33:49.673 ************************************ 00:33:49.673 START TEST fio_dif_1_default 00:33:49.673 ************************************ 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1129 -- # fio_dif_1 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- 
target/dif.sh@30 -- # for sub in "$@" 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.673 bdev_null0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:33:49.673 [2024-11-20 10:51:30.208340] 
tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # config=() 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@372 -- # local subsystem config 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:33:49.673 { 00:33:49.673 "params": { 00:33:49.673 "name": "Nvme$subsystem", 00:33:49.673 "trtype": "$TEST_TRANSPORT", 00:33:49.673 "traddr": "$NVMF_FIRST_TARGET_IP", 00:33:49.673 "adrfam": "ipv4", 00:33:49.673 "trsvcid": "$NVMF_PORT", 00:33:49.673 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:33:49.673 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:33:49.673 "hdgst": ${hdgst:-false}, 00:33:49.673 "ddgst": ${ddgst:-false} 00:33:49.673 }, 00:33:49.673 "method": "bdev_nvme_attach_controller" 00:33:49.673 } 00:33:49.673 EOF 00:33:49.673 )") 00:33:49.673 10:51:30 
nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local sanitizers 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # shift 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1347 -- # local asan_lib= 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@394 -- # cat 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libasan 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@396 -- # jq . 
00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@397 -- # IFS=, 00:33:49.673 10:51:30 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:33:49.673 "params": { 00:33:49.673 "name": "Nvme0", 00:33:49.673 "trtype": "tcp", 00:33:49.673 "traddr": "10.0.0.2", 00:33:49.673 "adrfam": "ipv4", 00:33:49.673 "trsvcid": "4420", 00:33:49.673 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:33:49.673 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:33:49.674 "hdgst": false, 00:33:49.674 "ddgst": false 00:33:49.674 }, 00:33:49.674 "method": "bdev_nvme_attach_controller" 00:33:49.674 }' 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1349 -- # asan_lib= 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:33:49.674 10:51:30 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:33:49.932 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:33:49.932 fio-3.35 
00:33:49.932 Starting 1 thread 00:34:02.144 00:34:02.144 filename0: (groupid=0, jobs=1): err= 0: pid=3480362: Wed Nov 20 10:51:41 2024 00:34:02.144 read: IOPS=98, BW=395KiB/s (404kB/s)(3952KiB/10012msec) 00:34:02.144 slat (nsec): min=5677, max=26820, avg=6168.00, stdev=1406.83 00:34:02.144 clat (usec): min=370, max=42965, avg=40513.31, stdev=4454.67 00:34:02.144 lat (usec): min=375, max=42992, avg=40519.48, stdev=4454.65 00:34:02.144 clat percentiles (usec): 00:34:02.144 | 1.00th=[ 404], 5.00th=[40633], 10.00th=[41157], 20.00th=[41157], 00:34:02.144 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:02.144 | 70.00th=[41157], 80.00th=[41157], 90.00th=[41157], 95.00th=[41157], 00:34:02.144 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42730], 99.95th=[42730], 00:34:02.144 | 99.99th=[42730] 00:34:02.144 bw ( KiB/s): min= 384, max= 480, per=99.56%, avg=393.60, stdev=23.45, samples=20 00:34:02.144 iops : min= 96, max= 120, avg=98.40, stdev= 5.86, samples=20 00:34:02.144 lat (usec) : 500=1.21% 00:34:02.144 lat (msec) : 50=98.79% 00:34:02.144 cpu : usr=92.38%, sys=7.35%, ctx=34, majf=0, minf=0 00:34:02.144 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:02.144 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.144 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:02.144 issued rwts: total=988,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:02.144 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:02.144 00:34:02.144 Run status group 0 (all jobs): 00:34:02.144 READ: bw=395KiB/s (404kB/s), 395KiB/s-395KiB/s (404kB/s-404kB/s), io=3952KiB (4047kB), run=10012-10012msec 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:34:02.144 10:51:41 
nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 00:34:02.144 real 0m11.092s 00:34:02.144 user 0m16.029s 00:34:02.144 sys 0m1.026s 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 ************************************ 00:34:02.144 END TEST fio_dif_1_default 00:34:02.144 ************************************ 00:34:02.144 10:51:41 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:34:02.144 10:51:41 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:02.144 10:51:41 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 ************************************ 00:34:02.144 START TEST fio_dif_1_multi_subsystems 00:34:02.144 ************************************ 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems 
-- common/autotest_common.sh@1129 -- # fio_dif_1_multi_subsystems 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 bdev_null0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@10 -- # set +x 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.144 [2024-11-20 10:51:41.370603] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.144 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.145 bdev_null1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.145 10:51:41 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # config=() 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@372 -- # local subsystem config 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1360 -- # fio_plugin 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:02.145 { 00:34:02.145 "params": { 00:34:02.145 "name": "Nvme$subsystem", 00:34:02.145 "trtype": "$TEST_TRANSPORT", 00:34:02.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.145 "adrfam": "ipv4", 00:34:02.145 "trsvcid": "$NVMF_PORT", 00:34:02.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.145 "hdgst": ${hdgst:-false}, 00:34:02.145 "ddgst": ${ddgst:-false} 00:34:02.145 }, 00:34:02.145 "method": "bdev_nvme_attach_controller" 00:34:02.145 } 00:34:02.145 EOF 00:34:02.145 )") 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # shift 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.145 10:51:41 
nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libasan 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:02.145 { 00:34:02.145 "params": { 00:34:02.145 "name": "Nvme$subsystem", 00:34:02.145 "trtype": "$TEST_TRANSPORT", 00:34:02.145 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:02.145 "adrfam": "ipv4", 00:34:02.145 "trsvcid": "$NVMF_PORT", 00:34:02.145 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:02.145 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:02.145 "hdgst": ${hdgst:-false}, 00:34:02.145 "ddgst": ${ddgst:-false} 00:34:02.145 }, 00:34:02.145 "method": "bdev_nvme_attach_controller" 00:34:02.145 } 00:34:02.145 EOF 00:34:02.145 )") 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@394 -- # cat 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@396 -- # jq . 
00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@397 -- # IFS=, 00:34:02.145 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:02.145 "params": { 00:34:02.145 "name": "Nvme0", 00:34:02.145 "trtype": "tcp", 00:34:02.145 "traddr": "10.0.0.2", 00:34:02.145 "adrfam": "ipv4", 00:34:02.145 "trsvcid": "4420", 00:34:02.145 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:02.145 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:02.145 "hdgst": false, 00:34:02.145 "ddgst": false 00:34:02.145 }, 00:34:02.145 "method": "bdev_nvme_attach_controller" 00:34:02.145 },{ 00:34:02.145 "params": { 00:34:02.145 "name": "Nvme1", 00:34:02.145 "trtype": "tcp", 00:34:02.145 "traddr": "10.0.0.2", 00:34:02.145 "adrfam": "ipv4", 00:34:02.145 "trsvcid": "4420", 00:34:02.145 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:02.145 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:02.145 "hdgst": false, 00:34:02.145 "ddgst": false 00:34:02.145 }, 00:34:02.145 "method": "bdev_nvme_attach_controller" 00:34:02.145 }' 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:02.146 10:51:41 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:02.146 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.146 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:34:02.146 fio-3.35 00:34:02.146 Starting 2 threads 00:34:12.124 00:34:12.124 filename0: (groupid=0, jobs=1): err= 0: pid=3482323: Wed Nov 20 10:51:52 2024 00:34:12.124 read: IOPS=96, BW=387KiB/s (396kB/s)(3872KiB/10013msec) 00:34:12.124 slat (nsec): min=5888, max=42384, avg=10904.35, stdev=7088.14 00:34:12.124 clat (usec): min=40810, max=42085, avg=41338.86, stdev=477.76 00:34:12.124 lat (usec): min=40817, max=42128, avg=41349.76, stdev=477.67 00:34:12.124 clat percentiles (usec): 00:34:12.124 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:12.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.124 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:12.124 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.124 | 99.99th=[42206] 00:34:12.124 bw ( KiB/s): min= 352, max= 416, per=49.60%, avg=385.60, stdev=12.61, samples=20 00:34:12.124 iops : min= 88, max= 104, avg=96.40, stdev= 3.15, samples=20 00:34:12.124 lat (msec) : 50=100.00% 00:34:12.124 cpu : usr=97.75%, sys=1.98%, ctx=15, majf=0, minf=69 00:34:12.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.124 complete : 0=0.0%, 4=100.0%, 
8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.124 issued rwts: total=968,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.124 filename1: (groupid=0, jobs=1): err= 0: pid=3482324: Wed Nov 20 10:51:52 2024 00:34:12.124 read: IOPS=97, BW=390KiB/s (399kB/s)(3904KiB/10017msec) 00:34:12.124 slat (nsec): min=5986, max=87862, avg=12124.53, stdev=9426.35 00:34:12.124 clat (usec): min=421, max=42235, avg=41011.26, stdev=3721.15 00:34:12.124 lat (usec): min=428, max=42247, avg=41023.39, stdev=3721.20 00:34:12.124 clat percentiles (usec): 00:34:12.124 | 1.00th=[40633], 5.00th=[41157], 10.00th=[41157], 20.00th=[41157], 00:34:12.124 | 30.00th=[41157], 40.00th=[41157], 50.00th=[41157], 60.00th=[41157], 00:34:12.124 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:34:12.124 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:34:12.124 | 99.99th=[42206] 00:34:12.124 bw ( KiB/s): min= 352, max= 416, per=49.98%, avg=388.80, stdev=15.66, samples=20 00:34:12.124 iops : min= 88, max= 104, avg=97.20, stdev= 3.91, samples=20 00:34:12.124 lat (usec) : 500=0.82% 00:34:12.124 lat (msec) : 50=99.18% 00:34:12.124 cpu : usr=98.81%, sys=0.89%, ctx=28, majf=0, minf=141 00:34:12.124 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:12.124 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.124 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:12.124 issued rwts: total=976,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:12.124 latency : target=0, window=0, percentile=100.00%, depth=4 00:34:12.124 00:34:12.124 Run status group 0 (all jobs): 00:34:12.124 READ: bw=776KiB/s (795kB/s), 387KiB/s-390KiB/s (396kB/s-399kB/s), io=7776KiB (7963kB), run=10013-10017msec 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:34:12.124 10:51:52 
nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:12.124 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.125 00:34:12.125 real 0m11.453s 00:34:12.125 user 0m26.656s 00:34:12.125 sys 0m0.611s 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:34:12.125 ************************************ 00:34:12.125 END TEST fio_dif_1_multi_subsystems 00:34:12.125 ************************************ 00:34:12.125 10:51:52 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:34:12.125 10:51:52 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:12.125 10:51:52 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:12.125 10:51:52 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:12.385 ************************************ 00:34:12.385 START TEST fio_dif_rand_params 00:34:12.385 ************************************ 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1129 -- # fio_dif_rand_params 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@103 -- # numjobs=3 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.385 bdev_null0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.385 10:51:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:12.385 [2024-11-20 10:51:52.900945] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:12.385 { 00:34:12.385 "params": { 00:34:12.385 "name": "Nvme$subsystem", 00:34:12.385 "trtype": "$TEST_TRANSPORT", 00:34:12.385 "traddr": 
"$NVMF_FIRST_TARGET_IP", 00:34:12.385 "adrfam": "ipv4", 00:34:12.385 "trsvcid": "$NVMF_PORT", 00:34:12.385 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:12.385 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:12.385 "hdgst": ${hdgst:-false}, 00:34:12.385 "ddgst": ${ddgst:-false} 00:34:12.385 }, 00:34:12.385 "method": "bdev_nvme_attach_controller" 00:34:12.385 } 00:34:12.385 EOF 00:34:12.385 )") 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:12.385 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:12.386 10:51:52 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:12.386 "params": { 00:34:12.386 "name": "Nvme0", 00:34:12.386 "trtype": "tcp", 00:34:12.386 "traddr": "10.0.0.2", 00:34:12.386 "adrfam": "ipv4", 00:34:12.386 "trsvcid": "4420", 00:34:12.386 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:12.386 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:12.386 "hdgst": false, 00:34:12.386 "ddgst": false 00:34:12.386 }, 00:34:12.386 "method": "bdev_nvme_attach_controller" 00:34:12.386 }' 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:12.386 10:51:52 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev 
--spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:12.645 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:12.645 ... 00:34:12.645 fio-3.35 00:34:12.645 Starting 3 threads 00:34:18.076 00:34:18.076 filename0: (groupid=0, jobs=1): err= 0: pid=3484281: Wed Nov 20 10:51:58 2024 00:34:18.076 read: IOPS=343, BW=42.9MiB/s (45.0MB/s)(215MiB/5003msec) 00:34:18.076 slat (nsec): min=5997, max=34147, avg=10341.44, stdev=2112.57 00:34:18.076 clat (usec): min=3014, max=50168, avg=8728.40, stdev=5205.03 00:34:18.076 lat (usec): min=3020, max=50179, avg=8738.74, stdev=5204.97 00:34:18.076 clat percentiles (usec): 00:34:18.076 | 1.00th=[ 3720], 5.00th=[ 4948], 10.00th=[ 5932], 20.00th=[ 7177], 00:34:18.076 | 30.00th=[ 7635], 40.00th=[ 7963], 50.00th=[ 8291], 60.00th=[ 8586], 00:34:18.076 | 70.00th=[ 8979], 80.00th=[ 9241], 90.00th=[ 9896], 95.00th=[10421], 00:34:18.076 | 99.00th=[44827], 99.50th=[47449], 99.90th=[50070], 99.95th=[50070], 00:34:18.076 | 99.99th=[50070] 00:34:18.076 bw ( KiB/s): min=28928, max=52992, per=37.23%, avg=43878.40, stdev=6628.81, samples=10 00:34:18.076 iops : min= 226, max= 414, avg=342.80, stdev=51.79, samples=10 00:34:18.076 lat (msec) : 4=2.91%, 10=88.82%, 20=6.52%, 50=1.63%, 100=0.12% 00:34:18.076 cpu : usr=93.94%, sys=5.78%, ctx=9, majf=0, minf=0 00:34:18.076 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 issued rwts: total=1717,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.076 filename0: (groupid=0, jobs=1): err= 0: pid=3484282: Wed Nov 20 10:51:58 2024 00:34:18.076 read: IOPS=298, BW=37.3MiB/s (39.1MB/s)(188MiB/5048msec) 00:34:18.076 slat (nsec): min=5971, max=25725, avg=10485.90, stdev=1862.52 
00:34:18.076 clat (usec): min=3376, max=51983, avg=10008.18, stdev=6188.37 00:34:18.076 lat (usec): min=3382, max=51995, avg=10018.67, stdev=6188.33 00:34:18.076 clat percentiles (usec): 00:34:18.076 | 1.00th=[ 3589], 5.00th=[ 6128], 10.00th=[ 6849], 20.00th=[ 7963], 00:34:18.076 | 30.00th=[ 8455], 40.00th=[ 8848], 50.00th=[ 9241], 60.00th=[ 9634], 00:34:18.076 | 70.00th=[10028], 80.00th=[10421], 90.00th=[10945], 95.00th=[11600], 00:34:18.076 | 99.00th=[47449], 99.50th=[49021], 99.90th=[51119], 99.95th=[52167], 00:34:18.076 | 99.99th=[52167] 00:34:18.076 bw ( KiB/s): min=25856, max=41728, per=32.67%, avg=38502.40, stdev=4642.95, samples=10 00:34:18.076 iops : min= 202, max= 326, avg=300.80, stdev=36.27, samples=10 00:34:18.076 lat (msec) : 4=1.19%, 10=66.62%, 20=29.66%, 50=2.19%, 100=0.33% 00:34:18.076 cpu : usr=94.43%, sys=5.29%, ctx=11, majf=0, minf=2 00:34:18.076 IO depths : 1=1.1%, 2=98.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 issued rwts: total=1507,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.076 filename0: (groupid=0, jobs=1): err= 0: pid=3484283: Wed Nov 20 10:51:58 2024 00:34:18.076 read: IOPS=282, BW=35.3MiB/s (37.0MB/s)(178MiB/5049msec) 00:34:18.076 slat (nsec): min=6038, max=25711, avg=10459.47, stdev=1859.83 00:34:18.076 clat (usec): min=3409, max=54554, avg=10583.77, stdev=7184.55 00:34:18.076 lat (usec): min=3415, max=54561, avg=10594.23, stdev=7184.32 00:34:18.076 clat percentiles (usec): 00:34:18.076 | 1.00th=[ 3490], 5.00th=[ 6456], 10.00th=[ 7373], 20.00th=[ 8160], 00:34:18.076 | 30.00th=[ 8717], 40.00th=[ 9110], 50.00th=[ 9372], 60.00th=[ 9765], 00:34:18.076 | 70.00th=[10290], 80.00th=[10683], 90.00th=[11338], 95.00th=[12125], 00:34:18.076 | 99.00th=[49021], 99.50th=[49546], 
99.90th=[51119], 99.95th=[54789], 00:34:18.076 | 99.99th=[54789] 00:34:18.076 bw ( KiB/s): min=13824, max=42496, per=30.91%, avg=36428.80, stdev=8877.17, samples=10 00:34:18.076 iops : min= 108, max= 332, avg=284.60, stdev=69.35, samples=10 00:34:18.076 lat (msec) : 4=1.26%, 10=63.30%, 20=31.93%, 50=3.23%, 100=0.28% 00:34:18.076 cpu : usr=94.57%, sys=5.15%, ctx=13, majf=0, minf=0 00:34:18.076 IO depths : 1=1.4%, 2=98.6%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:18.076 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:18.076 issued rwts: total=1425,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:18.076 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:18.076 00:34:18.076 Run status group 0 (all jobs): 00:34:18.076 READ: bw=115MiB/s (121MB/s), 35.3MiB/s-42.9MiB/s (37.0MB/s-45.0MB/s), io=581MiB (609MB), run=5003-5049msec 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:18.336 10:51:58 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 bdev_null0 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # 
xtrace_disable 00:34:18.336 10:51:58 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 [2024-11-20 10:51:59.014404] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 bdev_null1 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 
== 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.336 10:51:59 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@10 -- # set +x 00:34:18.596 bdev_null2 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:18.596 { 00:34:18.596 "params": { 00:34:18.596 "name": "Nvme$subsystem", 00:34:18.596 "trtype": "$TEST_TRANSPORT", 00:34:18.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.596 "adrfam": "ipv4", 00:34:18.596 "trsvcid": "$NVMF_PORT", 00:34:18.596 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.596 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.596 "hdgst": ${hdgst:-false}, 00:34:18.596 "ddgst": ${ddgst:-false} 00:34:18.596 }, 00:34:18.596 "method": "bdev_nvme_attach_controller" 00:34:18.596 } 00:34:18.596 EOF 00:34:18.596 )") 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.596 10:51:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:18.596 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:18.596 { 00:34:18.596 "params": { 00:34:18.596 "name": "Nvme$subsystem", 00:34:18.596 "trtype": "$TEST_TRANSPORT", 00:34:18.596 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.596 "adrfam": "ipv4", 00:34:18.597 "trsvcid": "$NVMF_PORT", 00:34:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.597 "hdgst": ${hdgst:-false}, 00:34:18.597 "ddgst": ${ddgst:-false} 00:34:18.597 }, 00:34:18.597 "method": "bdev_nvme_attach_controller" 00:34:18.597 } 00:34:18.597 EOF 00:34:18.597 )") 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.597 10:51:59 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:18.597 { 00:34:18.597 "params": { 00:34:18.597 "name": "Nvme$subsystem", 00:34:18.597 "trtype": "$TEST_TRANSPORT", 00:34:18.597 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:18.597 "adrfam": "ipv4", 00:34:18.597 "trsvcid": "$NVMF_PORT", 00:34:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:18.597 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:18.597 "hdgst": ${hdgst:-false}, 00:34:18.597 "ddgst": ${ddgst:-false} 00:34:18.597 }, 00:34:18.597 "method": "bdev_nvme_attach_controller" 00:34:18.597 } 00:34:18.597 EOF 00:34:18.597 )") 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:18.597 "params": { 00:34:18.597 "name": "Nvme0", 00:34:18.597 "trtype": "tcp", 00:34:18.597 "traddr": "10.0.0.2", 00:34:18.597 "adrfam": "ipv4", 00:34:18.597 "trsvcid": "4420", 00:34:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:18.597 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:18.597 "hdgst": false, 00:34:18.597 "ddgst": false 00:34:18.597 }, 00:34:18.597 "method": "bdev_nvme_attach_controller" 00:34:18.597 },{ 00:34:18.597 "params": { 00:34:18.597 "name": "Nvme1", 00:34:18.597 "trtype": "tcp", 00:34:18.597 "traddr": "10.0.0.2", 00:34:18.597 "adrfam": "ipv4", 00:34:18.597 "trsvcid": "4420", 00:34:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:18.597 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:18.597 "hdgst": false, 00:34:18.597 "ddgst": false 00:34:18.597 }, 00:34:18.597 "method": "bdev_nvme_attach_controller" 00:34:18.597 },{ 00:34:18.597 "params": { 00:34:18.597 "name": "Nvme2", 00:34:18.597 "trtype": "tcp", 00:34:18.597 "traddr": "10.0.0.2", 00:34:18.597 "adrfam": "ipv4", 00:34:18.597 "trsvcid": "4420", 00:34:18.597 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:34:18.597 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:34:18.597 "hdgst": false, 00:34:18.597 "ddgst": false 00:34:18.597 }, 00:34:18.597 "method": "bdev_nvme_attach_controller" 00:34:18.597 }' 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:18.597 10:51:59 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:18.597 10:51:59 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:18.856 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.856 ... 00:34:18.856 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.856 ... 00:34:18.856 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:34:18.856 ... 
00:34:18.856 fio-3.35 00:34:18.856 Starting 24 threads 00:34:31.065 00:34:31.065 filename0: (groupid=0, jobs=1): err= 0: pid=3485485: Wed Nov 20 10:52:10 2024 00:34:31.065 read: IOPS=607, BW=2429KiB/s (2487kB/s)(23.8MiB/10012msec) 00:34:31.065 slat (nsec): min=7310, max=77043, avg=21241.46, stdev=13562.72 00:34:31.065 clat (usec): min=7460, max=34143, avg=26180.21, stdev=2292.17 00:34:31.065 lat (usec): min=7470, max=34158, avg=26201.45, stdev=2293.09 00:34:31.065 clat percentiles (usec): 00:34:31.065 | 1.00th=[16450], 5.00th=[23725], 10.00th=[24249], 20.00th=[24773], 00:34:31.065 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:31.065 | 70.00th=[26870], 80.00th=[27919], 90.00th=[28967], 95.00th=[29754], 00:34:31.065 | 99.00th=[30540], 99.50th=[30802], 99.90th=[32113], 99.95th=[33424], 00:34:31.065 | 99.99th=[34341] 00:34:31.065 bw ( KiB/s): min= 2176, max= 2816, per=4.19%, avg=2425.60, stdev=163.37, samples=20 00:34:31.065 iops : min= 544, max= 704, avg=606.40, stdev=40.84, samples=20 00:34:31.065 lat (msec) : 10=0.26%, 20=0.82%, 50=98.91% 00:34:31.065 cpu : usr=98.46%, sys=1.08%, ctx=137, majf=0, minf=58 00:34:31.065 IO depths : 1=5.6%, 2=11.8%, 4=25.0%, 8=50.7%, 16=6.9%, 32=0.0%, >=64=0.0% 00:34:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 complete : 0=0.0%, 4=94.2%, 8=0.0%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.065 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.065 filename0: (groupid=0, jobs=1): err= 0: pid=3485486: Wed Nov 20 10:52:10 2024 00:34:31.065 read: IOPS=600, BW=2403KiB/s (2460kB/s)(23.5MiB/10015msec) 00:34:31.065 slat (usec): min=5, max=112, avg=53.28, stdev=16.76 00:34:31.065 clat (usec): min=22535, max=59661, avg=26155.64, stdev=2560.87 00:34:31.065 lat (usec): min=22550, max=59726, avg=26208.93, stdev=2563.53 00:34:31.065 clat percentiles (usec): 00:34:31.065 | 
1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.065 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:34:31.065 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29230], 00:34:31.065 | 99.00th=[30278], 99.50th=[44303], 99.90th=[59507], 99.95th=[59507], 00:34:31.065 | 99.99th=[59507] 00:34:31.065 bw ( KiB/s): min= 2039, max= 2688, per=4.15%, avg=2399.55, stdev=125.07, samples=20 00:34:31.065 iops : min= 509, max= 672, avg=599.85, stdev=31.38, samples=20 00:34:31.065 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.065 cpu : usr=98.49%, sys=1.00%, ctx=75, majf=0, minf=31 00:34:31.065 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.065 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.065 filename0: (groupid=0, jobs=1): err= 0: pid=3485487: Wed Nov 20 10:52:10 2024 00:34:31.065 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10012msec) 00:34:31.065 slat (usec): min=7, max=159, avg=56.09, stdev=19.04 00:34:31.065 clat (usec): min=22485, max=59455, avg=26112.90, stdev=2500.76 00:34:31.065 lat (usec): min=22502, max=59496, avg=26168.99, stdev=2504.96 00:34:31.065 clat percentiles (usec): 00:34:31.065 | 1.00th=[23200], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:34:31.065 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:34:31.065 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29230], 00:34:31.065 | 99.00th=[30278], 99.50th=[41157], 99.90th=[58983], 99.95th=[58983], 00:34:31.065 | 99.99th=[59507] 00:34:31.065 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.20, stdev=123.78, samples=20 00:34:31.065 iops : min= 512, max= 672, avg=600.05, stdev=30.94, samples=20 00:34:31.065 
lat (msec) : 50=99.73%, 100=0.27% 00:34:31.065 cpu : usr=98.93%, sys=0.68%, ctx=15, majf=0, minf=25 00:34:31.065 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.065 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.065 filename0: (groupid=0, jobs=1): err= 0: pid=3485488: Wed Nov 20 10:52:10 2024 00:34:31.065 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.6MiB/10029msec) 00:34:31.065 slat (usec): min=6, max=152, avg=40.93, stdev=23.06 00:34:31.065 clat (usec): min=19359, max=59276, avg=26229.52, stdev=2384.79 00:34:31.065 lat (usec): min=19373, max=59309, avg=26270.45, stdev=2391.07 00:34:31.065 clat percentiles (usec): 00:34:31.065 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.065 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:31.065 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:34:31.065 | 99.00th=[30540], 99.50th=[31065], 99.90th=[58983], 99.95th=[58983], 00:34:31.065 | 99.99th=[59507] 00:34:31.065 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2406.40, stdev=146.77, samples=20 00:34:31.065 iops : min= 512, max= 672, avg=601.60, stdev=36.69, samples=20 00:34:31.065 lat (msec) : 20=0.10%, 50=99.64%, 100=0.27% 00:34:31.065 cpu : usr=98.56%, sys=0.99%, ctx=53, majf=0, minf=29 00:34:31.065 IO depths : 1=5.8%, 2=12.0%, 4=24.9%, 8=50.5%, 16=6.7%, 32=0.0%, >=64=0.0% 00:34:31.065 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.065 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.065 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.065 
filename0: (groupid=0, jobs=1): err= 0: pid=3485489: Wed Nov 20 10:52:10 2024 00:34:31.065 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10012msec) 00:34:31.065 slat (usec): min=7, max=122, avg=53.80, stdev=18.08 00:34:31.065 clat (usec): min=22538, max=59658, avg=26146.31, stdev=2501.92 00:34:31.065 lat (usec): min=22560, max=59728, avg=26200.11, stdev=2505.36 00:34:31.065 clat percentiles (usec): 00:34:31.065 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.065 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:34:31.065 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29230], 00:34:31.065 | 99.00th=[30278], 99.50th=[41681], 99.90th=[58983], 99.95th=[59507], 00:34:31.065 | 99.99th=[59507] 00:34:31.065 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.20, stdev=123.78, samples=20 00:34:31.065 iops : min= 512, max= 672, avg=600.05, stdev=30.94, samples=20 00:34:31.065 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.066 cpu : usr=98.44%, sys=1.12%, ctx=43, majf=0, minf=26 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename0: (groupid=0, jobs=1): err= 0: pid=3485490: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.8MiB/10046msec) 00:34:31.066 slat (usec): min=7, max=113, avg=37.74, stdev=19.99 00:34:31.066 clat (usec): min=10088, max=58987, avg=26154.38, stdev=2738.96 00:34:31.066 lat (usec): min=10103, max=59003, avg=26192.12, stdev=2739.48 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[16909], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.066 | 30.00th=[25035], 
40.00th=[25297], 50.00th=[26084], 60.00th=[26346], 00:34:31.066 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.066 | 99.00th=[30278], 99.50th=[31327], 99.90th=[58983], 99.95th=[58983], 00:34:31.066 | 99.99th=[58983] 00:34:31.066 bw ( KiB/s): min= 2176, max= 2816, per=4.19%, avg=2425.60, stdev=163.37, samples=20 00:34:31.066 iops : min= 544, max= 704, avg=606.40, stdev=40.84, samples=20 00:34:31.066 lat (msec) : 20=1.05%, 50=98.68%, 100=0.26% 00:34:31.066 cpu : usr=98.35%, sys=1.02%, ctx=117, majf=0, minf=30 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename0: (groupid=0, jobs=1): err= 0: pid=3485492: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=601, BW=2407KiB/s (2465kB/s)(23.6MiB/10023msec) 00:34:31.066 slat (usec): min=7, max=115, avg=50.42, stdev=19.04 00:34:31.066 clat (usec): min=22504, max=59446, avg=26181.90, stdev=2384.06 00:34:31.066 lat (usec): min=22526, max=59493, avg=26232.32, stdev=2386.44 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.066 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:34:31.066 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:34:31.066 | 99.00th=[30278], 99.50th=[31065], 99.90th=[58983], 99.95th=[59507], 00:34:31.066 | 99.99th=[59507] 00:34:31.066 bw ( KiB/s): min= 2007, max= 2688, per=4.16%, avg=2404.35, stdev=158.39, samples=20 00:34:31.066 iops : min= 501, max= 672, avg=601.05, stdev=39.70, samples=20 00:34:31.066 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.066 cpu : usr=97.80%, sys=1.45%, ctx=123, 
majf=0, minf=27 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename0: (groupid=0, jobs=1): err= 0: pid=3485493: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.7MiB/10046msec) 00:34:31.066 slat (usec): min=6, max=136, avg=23.15, stdev=15.91 00:34:31.066 clat (usec): min=10250, max=58511, avg=26299.19, stdev=2503.74 00:34:31.066 lat (usec): min=10260, max=58536, avg=26322.33, stdev=2503.34 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.066 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:31.066 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:34:31.066 | 99.00th=[30278], 99.50th=[30540], 99.90th=[58459], 99.95th=[58459], 00:34:31.066 | 99.99th=[58459] 00:34:31.066 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2419.20, stdev=143.26, samples=20 00:34:31.066 iops : min= 544, max= 672, avg=604.80, stdev=35.81, samples=20 00:34:31.066 lat (msec) : 20=0.53%, 50=99.21%, 100=0.26% 00:34:31.066 cpu : usr=98.19%, sys=1.14%, ctx=134, majf=0, minf=31 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename1: (groupid=0, jobs=1): err= 0: pid=3485494: Wed Nov 20 10:52:10 2024 
00:34:31.066 read: IOPS=600, BW=2403KiB/s (2461kB/s)(23.5MiB/10013msec) 00:34:31.066 slat (usec): min=6, max=115, avg=52.52, stdev=17.95 00:34:31.066 clat (usec): min=22551, max=59582, avg=26192.17, stdev=2523.63 00:34:31.066 lat (usec): min=22567, max=59630, avg=26244.69, stdev=2525.86 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.066 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:34:31.066 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:34:31.066 | 99.00th=[30540], 99.50th=[42730], 99.90th=[58983], 99.95th=[59507], 00:34:31.066 | 99.99th=[59507] 00:34:31.066 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.00, stdev=123.72, samples=20 00:34:31.066 iops : min= 512, max= 672, avg=600.00, stdev=30.93, samples=20 00:34:31.066 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.066 cpu : usr=97.82%, sys=1.47%, ctx=113, majf=0, minf=31 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename1: (groupid=0, jobs=1): err= 0: pid=3485495: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.8MiB/10046msec) 00:34:31.066 slat (usec): min=7, max=111, avg=27.91, stdev=17.77 00:34:31.066 clat (usec): min=10082, max=59196, avg=26228.61, stdev=2745.82 00:34:31.066 lat (usec): min=10101, max=59213, avg=26256.53, stdev=2746.35 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[17171], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.066 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:31.066 | 70.00th=[26608], 
80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:34:31.066 | 99.00th=[30278], 99.50th=[31589], 99.90th=[58983], 99.95th=[58983], 00:34:31.066 | 99.99th=[58983] 00:34:31.066 bw ( KiB/s): min= 2176, max= 2816, per=4.19%, avg=2425.60, stdev=163.37, samples=20 00:34:31.066 iops : min= 544, max= 704, avg=606.40, stdev=40.84, samples=20 00:34:31.066 lat (msec) : 20=1.05%, 50=98.68%, 100=0.26% 00:34:31.066 cpu : usr=98.61%, sys=1.04%, ctx=11, majf=0, minf=47 00:34:31.066 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename1: (groupid=0, jobs=1): err= 0: pid=3485496: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=605, BW=2421KiB/s (2479kB/s)(23.8MiB/10045msec) 00:34:31.066 slat (nsec): min=7265, max=84268, avg=14843.03, stdev=7987.28 00:34:31.066 clat (usec): min=10157, max=58502, avg=26311.96, stdev=2733.69 00:34:31.066 lat (usec): min=10170, max=58523, avg=26326.80, stdev=2732.64 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[17171], 5.00th=[23987], 10.00th=[24249], 20.00th=[25035], 00:34:31.066 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:31.066 | 70.00th=[26870], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:34:31.066 | 99.00th=[30540], 99.50th=[31589], 99.90th=[58459], 99.95th=[58459], 00:34:31.066 | 99.99th=[58459] 00:34:31.066 bw ( KiB/s): min= 2176, max= 2816, per=4.19%, avg=2425.60, stdev=163.37, samples=20 00:34:31.066 iops : min= 544, max= 704, avg=606.40, stdev=40.84, samples=20 00:34:31.066 lat (msec) : 20=1.09%, 50=98.65%, 100=0.26% 00:34:31.066 cpu : usr=98.31%, sys=1.18%, ctx=83, majf=0, minf=40 00:34:31.066 IO depths : 1=6.2%, 2=12.4%, 4=25.0%, 
8=50.1%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.066 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.066 filename1: (groupid=0, jobs=1): err= 0: pid=3485497: Wed Nov 20 10:52:10 2024 00:34:31.066 read: IOPS=622, BW=2490KiB/s (2550kB/s)(24.3MiB/10012msec) 00:34:31.066 slat (usec): min=6, max=120, avg=37.93, stdev=25.79 00:34:31.066 clat (usec): min=6654, max=63477, avg=25439.84, stdev=4126.69 00:34:31.066 lat (usec): min=6668, max=63516, avg=25477.77, stdev=4135.73 00:34:31.066 clat percentiles (usec): 00:34:31.066 | 1.00th=[13829], 5.00th=[16909], 10.00th=[22152], 20.00th=[24249], 00:34:31.066 | 30.00th=[25035], 40.00th=[25035], 50.00th=[25822], 60.00th=[26084], 00:34:31.066 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29754], 00:34:31.066 | 99.00th=[33817], 99.50th=[46400], 99.90th=[59507], 99.95th=[59507], 00:34:31.066 | 99.99th=[63701] 00:34:31.066 bw ( KiB/s): min= 2304, max= 2944, per=4.31%, avg=2495.68, stdev=190.83, samples=19 00:34:31.066 iops : min= 576, max= 736, avg=623.89, stdev=47.70, samples=19 00:34:31.066 lat (msec) : 10=0.39%, 20=7.72%, 50=91.61%, 100=0.29% 00:34:31.066 cpu : usr=96.74%, sys=1.90%, ctx=499, majf=0, minf=44 00:34:31.066 IO depths : 1=0.1%, 2=3.7%, 4=15.2%, 8=66.7%, 16=14.2%, 32=0.0%, >=64=0.0% 00:34:31.066 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.066 complete : 0=0.0%, 4=92.2%, 8=4.0%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6232,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename1: (groupid=0, jobs=1): err= 0: pid=3485498: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=601, BW=2407KiB/s 
(2465kB/s)(23.6MiB/10023msec) 00:34:31.067 slat (usec): min=5, max=114, avg=50.10, stdev=18.85 00:34:31.067 clat (usec): min=22531, max=59368, avg=26178.99, stdev=2377.36 00:34:31.067 lat (usec): min=22551, max=59405, avg=26229.08, stdev=2379.94 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.067 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:34:31.067 | 99.00th=[30278], 99.50th=[31065], 99.90th=[58983], 99.95th=[58983], 00:34:31.067 | 99.99th=[59507] 00:34:31.067 bw ( KiB/s): min= 2007, max= 2688, per=4.16%, avg=2404.35, stdev=158.39, samples=20 00:34:31.067 iops : min= 501, max= 672, avg=601.05, stdev=39.70, samples=20 00:34:31.067 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.067 cpu : usr=98.63%, sys=1.02%, ctx=18, majf=0, minf=33 00:34:31.067 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename1: (groupid=0, jobs=1): err= 0: pid=3485499: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=600, BW=2403KiB/s (2461kB/s)(23.5MiB/10013msec) 00:34:31.067 slat (usec): min=6, max=117, avg=53.41, stdev=16.71 00:34:31.067 clat (usec): min=22534, max=59689, avg=26167.39, stdev=2530.02 00:34:31.067 lat (usec): min=22556, max=59735, avg=26220.80, stdev=2532.63 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24511], 00:34:31.067 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 
00:34:31.067 | 99.00th=[30278], 99.50th=[42730], 99.90th=[59507], 99.95th=[59507], 00:34:31.067 | 99.99th=[59507] 00:34:31.067 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.00, stdev=123.72, samples=20 00:34:31.067 iops : min= 512, max= 672, avg=600.00, stdev=30.93, samples=20 00:34:31.067 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.067 cpu : usr=98.14%, sys=1.29%, ctx=58, majf=0, minf=36 00:34:31.067 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename1: (groupid=0, jobs=1): err= 0: pid=3485500: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=603, BW=2414KiB/s (2472kB/s)(23.7MiB/10046msec) 00:34:31.067 slat (nsec): min=6549, max=81385, avg=24690.64, stdev=16822.87 00:34:31.067 clat (usec): min=8589, max=58463, avg=26277.74, stdev=2495.14 00:34:31.067 lat (usec): min=8607, max=58502, avg=26302.43, stdev=2495.67 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.067 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26346], 60.00th=[26608], 00:34:31.067 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.067 | 99.00th=[30278], 99.50th=[30540], 99.90th=[58459], 99.95th=[58459], 00:34:31.067 | 99.99th=[58459] 00:34:31.067 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2419.20, stdev=143.26, samples=20 00:34:31.067 iops : min= 544, max= 672, avg=604.80, stdev=35.81, samples=20 00:34:31.067 lat (msec) : 10=0.03%, 20=0.49%, 50=99.21%, 100=0.26% 00:34:31.067 cpu : usr=98.43%, sys=1.14%, ctx=39, majf=0, minf=25 00:34:31.067 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.067 
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename1: (groupid=0, jobs=1): err= 0: pid=3485503: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.6MiB/10027msec) 00:34:31.067 slat (usec): min=6, max=134, avg=39.28, stdev=21.90 00:34:31.067 clat (usec): min=19361, max=59358, avg=26257.49, stdev=2408.56 00:34:31.067 lat (usec): min=19377, max=59389, avg=26296.77, stdev=2412.38 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.067 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.067 | 99.00th=[30540], 99.50th=[31327], 99.90th=[58983], 99.95th=[58983], 00:34:31.067 | 99.99th=[59507] 00:34:31.067 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2406.65, stdev=147.69, samples=20 00:34:31.067 iops : min= 512, max= 672, avg=601.65, stdev=36.91, samples=20 00:34:31.067 lat (msec) : 20=0.07%, 50=99.67%, 100=0.27% 00:34:31.067 cpu : usr=98.13%, sys=1.26%, ctx=86, majf=0, minf=33 00:34:31.067 IO depths : 1=5.9%, 2=12.1%, 4=24.9%, 8=50.4%, 16=6.6%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename2: (groupid=0, jobs=1): err= 0: pid=3485504: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.7MiB/10045msec) 00:34:31.067 slat (usec): min=6, max=143, avg=38.69, 
stdev=22.47 00:34:31.067 clat (usec): min=10082, max=59215, avg=26237.16, stdev=2635.53 00:34:31.067 lat (usec): min=10096, max=59239, avg=26275.86, stdev=2638.70 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[20579], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.067 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27919], 90.00th=[28967], 95.00th=[29754], 00:34:31.067 | 99.00th=[30540], 99.50th=[31851], 99.90th=[58983], 99.95th=[58983], 00:34:31.067 | 99.99th=[58983] 00:34:31.067 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2419.20, stdev=149.16, samples=20 00:34:31.067 iops : min= 544, max= 672, avg=604.80, stdev=37.29, samples=20 00:34:31.067 lat (msec) : 20=0.99%, 50=98.75%, 100=0.26% 00:34:31.067 cpu : usr=98.49%, sys=1.06%, ctx=55, majf=0, minf=35 00:34:31.067 IO depths : 1=1.9%, 2=8.2%, 4=25.0%, 8=54.3%, 16=10.6%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.3%, 8=0.0%, 16=5.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename2: (groupid=0, jobs=1): err= 0: pid=3485505: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=607, BW=2429KiB/s (2487kB/s)(23.8MiB/10012msec) 00:34:31.067 slat (nsec): min=6187, max=96556, avg=28452.35, stdev=17660.31 00:34:31.067 clat (usec): min=10194, max=31765, avg=26086.90, stdev=2181.27 00:34:31.067 lat (usec): min=10225, max=31793, avg=26115.35, stdev=2182.84 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[16909], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.067 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.067 | 99.00th=[30278], 99.50th=[30540], 
99.90th=[31327], 99.95th=[31589], 00:34:31.067 | 99.99th=[31851] 00:34:31.067 bw ( KiB/s): min= 2176, max= 2816, per=4.19%, avg=2425.60, stdev=163.37, samples=20 00:34:31.067 iops : min= 544, max= 704, avg=606.40, stdev=40.84, samples=20 00:34:31.067 lat (msec) : 20=1.09%, 50=98.91% 00:34:31.067 cpu : usr=98.75%, sys=0.88%, ctx=15, majf=0, minf=29 00:34:31.067 IO depths : 1=6.0%, 2=12.2%, 4=25.0%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6080,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename2: (groupid=0, jobs=1): err= 0: pid=3485506: Wed Nov 20 10:52:10 2024 00:34:31.067 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.7MiB/10044msec) 00:34:31.067 slat (usec): min=5, max=148, avg=47.84, stdev=21.77 00:34:31.067 clat (usec): min=10092, max=59224, avg=26117.47, stdev=2537.23 00:34:31.067 lat (usec): min=10107, max=59253, avg=26165.32, stdev=2541.30 00:34:31.067 clat percentiles (usec): 00:34:31.067 | 1.00th=[23200], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.067 | 30.00th=[25035], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:34:31.067 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29492], 00:34:31.067 | 99.00th=[30278], 99.50th=[30540], 99.90th=[58983], 99.95th=[58983], 00:34:31.067 | 99.99th=[58983] 00:34:31.067 bw ( KiB/s): min= 2176, max= 2688, per=4.18%, avg=2419.20, stdev=149.16, samples=20 00:34:31.067 iops : min= 544, max= 672, avg=604.80, stdev=37.29, samples=20 00:34:31.067 lat (msec) : 20=0.53%, 50=99.21%, 100=0.26% 00:34:31.067 cpu : usr=98.51%, sys=0.94%, ctx=56, majf=0, minf=26 00:34:31.067 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.067 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, 
>=64=0.0% 00:34:31.067 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.067 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.067 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.067 filename2: (groupid=0, jobs=1): err= 0: pid=3485507: Wed Nov 20 10:52:10 2024 00:34:31.068 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.6MiB/10028msec) 00:34:31.068 slat (usec): min=3, max=119, avg=38.25, stdev=20.83 00:34:31.068 clat (usec): min=21360, max=59556, avg=26271.93, stdev=2421.77 00:34:31.068 lat (usec): min=21371, max=59589, avg=26310.19, stdev=2426.79 00:34:31.068 clat percentiles (usec): 00:34:31.068 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.068 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[26346], 00:34:31.068 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.068 | 99.00th=[30802], 99.50th=[31327], 99.90th=[58983], 99.95th=[59507], 00:34:31.068 | 99.99th=[59507] 00:34:31.068 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2406.40, stdev=147.41, samples=20 00:34:31.068 iops : min= 512, max= 672, avg=601.50, stdev=36.84, samples=20 00:34:31.068 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.068 cpu : usr=98.18%, sys=1.28%, ctx=106, majf=0, minf=32 00:34:31.068 IO depths : 1=4.9%, 2=11.1%, 4=24.9%, 8=51.5%, 16=7.6%, 32=0.0%, >=64=0.0% 00:34:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 complete : 0=0.0%, 4=94.2%, 8=0.1%, 16=5.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.068 filename2: (groupid=0, jobs=1): err= 0: pid=3485508: Wed Nov 20 10:52:10 2024 00:34:31.068 read: IOPS=601, BW=2406KiB/s (2464kB/s)(23.6MiB/10028msec) 00:34:31.068 slat (usec): min=6, max=147, avg=45.52, stdev=23.45 00:34:31.068 clat (usec): min=20828, max=59290, avg=26194.20, 
stdev=2382.28 00:34:31.068 lat (usec): min=20838, max=59329, avg=26239.71, stdev=2389.80 00:34:31.068 clat percentiles (usec): 00:34:31.068 | 1.00th=[23462], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.068 | 30.00th=[25035], 40.00th=[25560], 50.00th=[25822], 60.00th=[26346], 00:34:31.068 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28705], 95.00th=[29492], 00:34:31.068 | 99.00th=[30540], 99.50th=[32375], 99.90th=[58983], 99.95th=[58983], 00:34:31.068 | 99.99th=[59507] 00:34:31.068 bw ( KiB/s): min= 2048, max= 2688, per=4.16%, avg=2406.65, stdev=147.69, samples=20 00:34:31.068 iops : min= 512, max= 672, avg=601.65, stdev=36.91, samples=20 00:34:31.068 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.068 cpu : usr=99.03%, sys=0.59%, ctx=31, majf=0, minf=28 00:34:31.068 IO depths : 1=6.0%, 2=12.2%, 4=24.9%, 8=50.3%, 16=6.5%, 32=0.0%, >=64=0.0% 00:34:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 complete : 0=0.0%, 4=94.1%, 8=0.1%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 issued rwts: total=6032,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.068 filename2: (groupid=0, jobs=1): err= 0: pid=3485509: Wed Nov 20 10:52:10 2024 00:34:31.068 read: IOPS=603, BW=2415KiB/s (2473kB/s)(23.7MiB/10044msec) 00:34:31.068 slat (usec): min=6, max=117, avg=26.58, stdev=19.15 00:34:31.068 clat (usec): min=10269, max=58477, avg=26228.58, stdev=2501.20 00:34:31.068 lat (usec): min=10277, max=58504, avg=26255.16, stdev=2501.96 00:34:31.068 clat percentiles (usec): 00:34:31.068 | 1.00th=[22938], 5.00th=[23987], 10.00th=[24249], 20.00th=[24773], 00:34:31.068 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26346], 60.00th=[26346], 00:34:31.068 | 70.00th=[26608], 80.00th=[27657], 90.00th=[28967], 95.00th=[29492], 00:34:31.068 | 99.00th=[30278], 99.50th=[31065], 99.90th=[58459], 99.95th=[58459], 00:34:31.068 | 99.99th=[58459] 00:34:31.068 bw ( KiB/s): min= 2176, max= 
2688, per=4.18%, avg=2419.20, stdev=143.26, samples=20 00:34:31.068 iops : min= 544, max= 672, avg=604.80, stdev=35.81, samples=20 00:34:31.068 lat (msec) : 20=0.53%, 50=99.21%, 100=0.26% 00:34:31.068 cpu : usr=98.32%, sys=1.11%, ctx=99, majf=0, minf=31 00:34:31.068 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.3%, 32=0.0%, >=64=0.0% 00:34:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 issued rwts: total=6064,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.068 filename2: (groupid=0, jobs=1): err= 0: pid=3485510: Wed Nov 20 10:52:10 2024 00:34:31.068 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10012msec) 00:34:31.068 slat (usec): min=7, max=159, avg=55.98, stdev=17.72 00:34:31.068 clat (usec): min=19419, max=59344, avg=26123.77, stdev=2519.71 00:34:31.068 lat (usec): min=19432, max=59469, avg=26179.75, stdev=2523.55 00:34:31.068 clat percentiles (usec): 00:34:31.068 | 1.00th=[23200], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:34:31.068 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:34:31.068 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29230], 00:34:31.068 | 99.00th=[30540], 99.50th=[41681], 99.90th=[58983], 99.95th=[58983], 00:34:31.068 | 99.99th=[59507] 00:34:31.068 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.20, stdev=123.78, samples=20 00:34:31.068 iops : min= 512, max= 672, avg=600.05, stdev=30.94, samples=20 00:34:31.068 lat (msec) : 20=0.10%, 50=99.63%, 100=0.27% 00:34:31.068 cpu : usr=98.91%, sys=0.66%, ctx=60, majf=0, minf=30 00:34:31.068 IO depths : 1=6.1%, 2=12.4%, 4=25.0%, 8=50.1%, 16=6.4%, 32=0.0%, >=64=0.0% 00:34:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:34:31.068 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.068 filename2: (groupid=0, jobs=1): err= 0: pid=3485511: Wed Nov 20 10:52:10 2024 00:34:31.068 read: IOPS=600, BW=2404KiB/s (2461kB/s)(23.5MiB/10012msec) 00:34:31.068 slat (usec): min=7, max=127, avg=54.43, stdev=19.44 00:34:31.068 clat (usec): min=22479, max=59660, avg=26146.47, stdev=2506.94 00:34:31.068 lat (usec): min=22499, max=59747, avg=26200.89, stdev=2510.27 00:34:31.068 clat percentiles (usec): 00:34:31.068 | 1.00th=[23200], 5.00th=[23725], 10.00th=[24249], 20.00th=[24511], 00:34:31.068 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26084], 00:34:31.068 | 70.00th=[26608], 80.00th=[27395], 90.00th=[28705], 95.00th=[29230], 00:34:31.068 | 99.00th=[30278], 99.50th=[41681], 99.90th=[58983], 99.95th=[59507], 00:34:31.068 | 99.99th=[59507] 00:34:31.068 bw ( KiB/s): min= 2048, max= 2688, per=4.15%, avg=2400.20, stdev=123.78, samples=20 00:34:31.068 iops : min= 512, max= 672, avg=600.05, stdev=30.94, samples=20 00:34:31.068 lat (msec) : 50=99.73%, 100=0.27% 00:34:31.068 cpu : usr=98.39%, sys=1.08%, ctx=124, majf=0, minf=25 00:34:31.068 IO depths : 1=6.2%, 2=12.5%, 4=25.0%, 8=50.0%, 16=6.2%, 32=0.0%, >=64=0.0% 00:34:31.068 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 complete : 0=0.0%, 4=94.1%, 8=0.0%, 16=5.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:31.068 issued rwts: total=6016,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:31.068 latency : target=0, window=0, percentile=100.00%, depth=16 00:34:31.068 00:34:31.068 Run status group 0 (all jobs): 00:34:31.068 READ: bw=56.5MiB/s (59.2MB/s), 2403KiB/s-2490KiB/s (2460kB/s-2550kB/s), io=567MiB (595MB), run=10012-10046msec 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:31.068 
10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:34:31.068 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 
00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 bdev_null0 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 [2024-11-20 10:52:10.699443] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 bdev_null1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # config=() 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@372 -- # local subsystem config 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:31.069 
{ 00:34:31.069 "params": { 00:34:31.069 "name": "Nvme$subsystem", 00:34:31.069 "trtype": "$TEST_TRANSPORT", 00:34:31.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.069 "adrfam": "ipv4", 00:34:31.069 "trsvcid": "$NVMF_PORT", 00:34:31.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.069 "hdgst": ${hdgst:-false}, 00:34:31.069 "ddgst": ${ddgst:-false} 00:34:31.069 }, 00:34:31.069 "method": "bdev_nvme_attach_controller" 00:34:31.069 } 00:34:31.069 EOF 00:34:31.069 )") 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # shift 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libasan 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk 
'{print $3}' 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:31.069 { 00:34:31.069 "params": { 00:34:31.069 "name": "Nvme$subsystem", 00:34:31.069 "trtype": "$TEST_TRANSPORT", 00:34:31.069 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:31.069 "adrfam": "ipv4", 00:34:31.069 "trsvcid": "$NVMF_PORT", 00:34:31.069 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:31.069 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:31.069 "hdgst": ${hdgst:-false}, 00:34:31.069 "ddgst": ${ddgst:-false} 00:34:31.069 }, 00:34:31.069 "method": "bdev_nvme_attach_controller" 00:34:31.069 } 00:34:31.069 EOF 00:34:31.069 )") 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@394 -- # cat 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@396 -- # jq . 
00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@397 -- # IFS=, 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:31.069 "params": { 00:34:31.069 "name": "Nvme0", 00:34:31.069 "trtype": "tcp", 00:34:31.069 "traddr": "10.0.0.2", 00:34:31.069 "adrfam": "ipv4", 00:34:31.069 "trsvcid": "4420", 00:34:31.069 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:31.069 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:31.069 "hdgst": false, 00:34:31.069 "ddgst": false 00:34:31.069 }, 00:34:31.069 "method": "bdev_nvme_attach_controller" 00:34:31.069 },{ 00:34:31.069 "params": { 00:34:31.069 "name": "Nvme1", 00:34:31.069 "trtype": "tcp", 00:34:31.069 "traddr": "10.0.0.2", 00:34:31.069 "adrfam": "ipv4", 00:34:31.069 "trsvcid": "4420", 00:34:31.069 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:34:31.069 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:34:31.069 "hdgst": false, 00:34:31.069 "ddgst": false 00:34:31.069 }, 00:34:31.069 "method": "bdev_nvme_attach_controller" 00:34:31.069 }' 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:31.069 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:31.070 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:31.070 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:31.070 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:31.070 10:52:10 
nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:31.070 10:52:10 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:31.070 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.070 ... 00:34:31.070 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:34:31.070 ... 00:34:31.070 fio-3.35 00:34:31.070 Starting 4 threads 00:34:36.338 00:34:36.338 filename0: (groupid=0, jobs=1): err= 0: pid=3487477: Wed Nov 20 10:52:16 2024 00:34:36.338 read: IOPS=2761, BW=21.6MiB/s (22.6MB/s)(108MiB/5002msec) 00:34:36.338 slat (nsec): min=5957, max=38809, avg=8289.58, stdev=2665.67 00:34:36.338 clat (usec): min=612, max=43455, avg=2873.83, stdev=1030.92 00:34:36.338 lat (usec): min=621, max=43494, avg=2882.12, stdev=1031.00 00:34:36.338 clat percentiles (usec): 00:34:36.338 | 1.00th=[ 1975], 5.00th=[ 2278], 10.00th=[ 2442], 20.00th=[ 2606], 00:34:36.338 | 30.00th=[ 2737], 40.00th=[ 2900], 50.00th=[ 2933], 60.00th=[ 2933], 00:34:36.338 | 70.00th=[ 2933], 80.00th=[ 2966], 90.00th=[ 3163], 95.00th=[ 3359], 00:34:36.338 | 99.00th=[ 3884], 99.50th=[ 4080], 99.90th=[ 4817], 99.95th=[43254], 00:34:36.338 | 99.99th=[43254] 00:34:36.338 bw ( KiB/s): min=20064, max=23520, per=25.45%, avg=21946.67, stdev=906.97, samples=9 00:34:36.338 iops : min= 2508, max= 2940, avg=2743.33, stdev=113.37, samples=9 00:34:36.338 lat (usec) : 750=0.01% 00:34:36.338 lat (msec) : 2=1.11%, 4=98.23%, 10=0.60%, 50=0.06% 00:34:36.338 cpu : usr=95.96%, sys=3.74%, ctx=6, majf=0, minf=0 00:34:36.338 IO depths : 1=0.2%, 2=2.7%, 4=67.4%, 8=29.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 
complete : 0=0.0%, 4=94.1%, 8=5.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 issued rwts: total=13812,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.338 filename0: (groupid=0, jobs=1): err= 0: pid=3487478: Wed Nov 20 10:52:16 2024 00:34:36.338 read: IOPS=2617, BW=20.5MiB/s (21.4MB/s)(102MiB/5001msec) 00:34:36.338 slat (nsec): min=5976, max=34712, avg=8179.23, stdev=2739.16 00:34:36.338 clat (usec): min=1000, max=5613, avg=3032.04, stdev=405.12 00:34:36.338 lat (usec): min=1006, max=5620, avg=3040.22, stdev=404.76 00:34:36.338 clat percentiles (usec): 00:34:36.338 | 1.00th=[ 1647], 5.00th=[ 2507], 10.00th=[ 2769], 20.00th=[ 2900], 00:34:36.338 | 30.00th=[ 2933], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2966], 00:34:36.338 | 70.00th=[ 3064], 80.00th=[ 3228], 90.00th=[ 3490], 95.00th=[ 3752], 00:34:36.338 | 99.00th=[ 4359], 99.50th=[ 4686], 99.90th=[ 5211], 99.95th=[ 5473], 00:34:36.338 | 99.99th=[ 5604] 00:34:36.338 bw ( KiB/s): min=19504, max=22544, per=24.52%, avg=21144.89, stdev=856.56, samples=9 00:34:36.338 iops : min= 2438, max= 2818, avg=2643.11, stdev=107.07, samples=9 00:34:36.338 lat (msec) : 2=1.82%, 4=95.29%, 10=2.89% 00:34:36.338 cpu : usr=95.82%, sys=3.88%, ctx=6, majf=0, minf=0 00:34:36.338 IO depths : 1=0.1%, 2=1.0%, 4=72.6%, 8=26.3%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 complete : 0=0.0%, 4=91.2%, 8=8.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 issued rwts: total=13092,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.338 filename1: (groupid=0, jobs=1): err= 0: pid=3487479: Wed Nov 20 10:52:16 2024 00:34:36.338 read: IOPS=2694, BW=21.0MiB/s (22.1MB/s)(105MiB/5001msec) 00:34:36.338 slat (nsec): min=5966, max=29485, avg=8234.80, stdev=2706.22 00:34:36.338 clat (usec): min=928, max=42687, avg=2944.72, 
stdev=1032.53 00:34:36.338 lat (usec): min=939, max=42716, avg=2952.96, stdev=1032.59 00:34:36.338 clat percentiles (usec): 00:34:36.338 | 1.00th=[ 2008], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2704], 00:34:36.338 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2933], 00:34:36.338 | 70.00th=[ 2966], 80.00th=[ 3032], 90.00th=[ 3261], 95.00th=[ 3556], 00:34:36.338 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 5014], 99.95th=[42730], 00:34:36.338 | 99.99th=[42730] 00:34:36.338 bw ( KiB/s): min=19735, max=22032, per=24.89%, avg=21462.11, stdev=695.76, samples=9 00:34:36.338 iops : min= 2466, max= 2754, avg=2682.67, stdev=87.24, samples=9 00:34:36.338 lat (usec) : 1000=0.01% 00:34:36.338 lat (msec) : 2=0.93%, 4=97.51%, 10=1.50%, 50=0.06% 00:34:36.338 cpu : usr=95.72%, sys=3.98%, ctx=7, majf=0, minf=0 00:34:36.338 IO depths : 1=0.2%, 2=2.8%, 4=69.9%, 8=27.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 complete : 0=0.0%, 4=92.0%, 8=8.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 issued rwts: total=13474,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.338 filename1: (groupid=0, jobs=1): err= 0: pid=3487480: Wed Nov 20 10:52:16 2024 00:34:36.338 read: IOPS=2706, BW=21.1MiB/s (22.2MB/s)(106MiB/5000msec) 00:34:36.338 slat (nsec): min=5947, max=35679, avg=8393.58, stdev=2878.32 00:34:36.338 clat (usec): min=695, max=5311, avg=2932.40, stdev=390.87 00:34:36.338 lat (usec): min=707, max=5320, avg=2940.79, stdev=390.59 00:34:36.338 clat percentiles (usec): 00:34:36.338 | 1.00th=[ 1680], 5.00th=[ 2343], 10.00th=[ 2507], 20.00th=[ 2704], 00:34:36.338 | 30.00th=[ 2900], 40.00th=[ 2933], 50.00th=[ 2933], 60.00th=[ 2933], 00:34:36.338 | 70.00th=[ 2966], 80.00th=[ 3064], 90.00th=[ 3326], 95.00th=[ 3621], 00:34:36.338 | 99.00th=[ 4228], 99.50th=[ 4555], 99.90th=[ 4948], 99.95th=[ 5014], 00:34:36.338 | 99.99th=[ 
5276] 00:34:36.338 bw ( KiB/s): min=20960, max=22476, per=25.14%, avg=21672.44, stdev=439.09, samples=9 00:34:36.338 iops : min= 2620, max= 2809, avg=2709.00, stdev=54.77, samples=9 00:34:36.338 lat (usec) : 750=0.01%, 1000=0.01% 00:34:36.338 lat (msec) : 2=1.74%, 4=96.33%, 10=1.91% 00:34:36.338 cpu : usr=95.72%, sys=3.98%, ctx=10, majf=0, minf=0 00:34:36.338 IO depths : 1=0.1%, 2=2.2%, 4=69.1%, 8=28.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:36.338 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 complete : 0=0.0%, 4=93.1%, 8=6.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:36.338 issued rwts: total=13531,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:36.338 latency : target=0, window=0, percentile=100.00%, depth=8 00:34:36.338 00:34:36.338 Run status group 0 (all jobs): 00:34:36.338 READ: bw=84.2MiB/s (88.3MB/s), 20.5MiB/s-21.6MiB/s (21.4MB/s-22.6MB/s), io=421MiB (442MB), run=5000-5002msec 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.597 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.598 00:34:36.598 real 0m24.246s 00:34:36.598 user 4m52.048s 00:34:36.598 sys 0m5.265s 00:34:36.598 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 ************************************ 00:34:36.598 END TEST fio_dif_rand_params 00:34:36.598 ************************************ 00:34:36.598 10:52:17 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:34:36.598 10:52:17 nvmf_dif -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 
00:34:36.598 10:52:17 nvmf_dif -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 ************************************ 00:34:36.598 START TEST fio_dif_digest 00:34:36.598 ************************************ 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1129 -- # fio_dif_digest 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # ddgst=true 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- 
common/autotest_common.sh@10 -- # set +x 00:34:36.598 bdev_null0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:36.598 [2024-11-20 10:52:17.211641] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # config=() 00:34:36.598 
10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@372 -- # local subsystem config 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@374 -- # for subsystem in "${@:-1}" 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # config+=("$(cat <<-EOF 00:34:36.598 { 00:34:36.598 "params": { 00:34:36.598 "name": "Nvme$subsystem", 00:34:36.598 "trtype": "$TEST_TRANSPORT", 00:34:36.598 "traddr": "$NVMF_FIRST_TARGET_IP", 00:34:36.598 "adrfam": "ipv4", 00:34:36.598 "trsvcid": "$NVMF_PORT", 00:34:36.598 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:34:36.598 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:34:36.598 "hdgst": ${hdgst:-false}, 00:34:36.598 "ddgst": ${ddgst:-false} 00:34:36.598 }, 00:34:36.598 "method": "bdev_nvme_attach_controller" 00:34:36.598 } 00:34:36.598 EOF 00:34:36.598 )") 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local sanitizers 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest 
-- common/autotest_common.sh@1345 -- # shift 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1347 -- # local asan_lib= 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@394 -- # cat 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.598 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libasan 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@396 -- # jq . 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@397 -- # IFS=, 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- nvmf/common.sh@398 -- # printf '%s\n' '{ 00:34:36.599 "params": { 00:34:36.599 "name": "Nvme0", 00:34:36.599 "trtype": "tcp", 00:34:36.599 "traddr": "10.0.0.2", 00:34:36.599 "adrfam": "ipv4", 00:34:36.599 "trsvcid": "4420", 00:34:36.599 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:34:36.599 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:34:36.599 "hdgst": true, 00:34:36.599 "ddgst": true 00:34:36.599 }, 00:34:36.599 "method": "bdev_nvme_attach_controller" 00:34:36.599 }' 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # grep libclang_rt.asan 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1349 -- # asan_lib= 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1350 -- # [[ -n '' ]] 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:34:36.599 10:52:17 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:34:36.859 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:34:36.859 ... 00:34:36.859 fio-3.35 00:34:36.859 Starting 3 threads 00:34:49.057 00:34:49.057 filename0: (groupid=0, jobs=1): err= 0: pid=3488576: Wed Nov 20 10:52:28 2024 00:34:49.057 read: IOPS=305, BW=38.2MiB/s (40.0MB/s)(383MiB/10046msec) 00:34:49.057 slat (nsec): min=6174, max=30821, avg=10753.59, stdev=1942.72 00:34:49.057 clat (usec): min=4214, max=49688, avg=9800.44, stdev=1383.01 00:34:49.057 lat (usec): min=4223, max=49699, avg=9811.19, stdev=1382.95 00:34:49.057 clat percentiles (usec): 00:34:49.057 | 1.00th=[ 7767], 5.00th=[ 8356], 10.00th=[ 8586], 20.00th=[ 8979], 00:34:49.057 | 30.00th=[ 9241], 40.00th=[ 9503], 50.00th=[ 9765], 60.00th=[ 9896], 00:34:49.057 | 70.00th=[10290], 80.00th=[10552], 90.00th=[11076], 95.00th=[11469], 00:34:49.057 | 99.00th=[12256], 99.50th=[12518], 99.90th=[14222], 99.95th=[46924], 00:34:49.057 | 99.99th=[49546] 00:34:49.057 bw ( KiB/s): min=36864, max=42752, per=35.36%, avg=39232.00, stdev=1650.47, samples=20 00:34:49.057 iops : min= 288, max= 334, avg=306.50, stdev=12.89, samples=20 00:34:49.057 lat 
(msec) : 10=61.30%, 20=38.64%, 50=0.07% 00:34:49.057 cpu : usr=95.69%, sys=4.00%, ctx=28, majf=0, minf=17 00:34:49.057 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.057 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.057 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.057 issued rwts: total=3067,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.057 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.057 filename0: (groupid=0, jobs=1): err= 0: pid=3488577: Wed Nov 20 10:52:28 2024 00:34:49.057 read: IOPS=273, BW=34.2MiB/s (35.9MB/s)(344MiB/10046msec) 00:34:49.057 slat (nsec): min=6208, max=31828, avg=11609.21, stdev=1753.13 00:34:49.057 clat (usec): min=6231, max=50181, avg=10933.47, stdev=1395.48 00:34:49.057 lat (usec): min=6241, max=50193, avg=10945.08, stdev=1395.53 00:34:49.057 clat percentiles (usec): 00:34:49.057 | 1.00th=[ 8848], 5.00th=[ 9503], 10.00th=[ 9765], 20.00th=[10159], 00:34:49.057 | 30.00th=[10421], 40.00th=[10552], 50.00th=[10814], 60.00th=[11076], 00:34:49.057 | 70.00th=[11338], 80.00th=[11731], 90.00th=[12125], 95.00th=[12518], 00:34:49.057 | 99.00th=[13435], 99.50th=[13829], 99.90th=[14877], 99.95th=[45876], 00:34:49.057 | 99.99th=[50070] 00:34:49.057 bw ( KiB/s): min=32256, max=37632, per=31.64%, avg=35098.95, stdev=1338.11, samples=19 00:34:49.057 iops : min= 252, max= 294, avg=274.21, stdev=10.45, samples=19 00:34:49.057 lat (msec) : 10=15.75%, 20=84.18%, 50=0.04%, 100=0.04% 00:34:49.057 cpu : usr=95.74%, sys=3.94%, ctx=20, majf=0, minf=25 00:34:49.057 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.058 issued rwts: total=2749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.058 latency : target=0, window=0, percentile=100.00%, 
depth=3 00:34:49.058 filename0: (groupid=0, jobs=1): err= 0: pid=3488578: Wed Nov 20 10:52:28 2024 00:34:49.058 read: IOPS=288, BW=36.1MiB/s (37.9MB/s)(361MiB/10004msec) 00:34:49.058 slat (nsec): min=6242, max=39380, avg=12315.64, stdev=2605.83 00:34:49.058 clat (usec): min=4932, max=51217, avg=10366.25, stdev=1585.21 00:34:49.058 lat (usec): min=4940, max=51252, avg=10378.56, stdev=1585.61 00:34:49.058 clat percentiles (usec): 00:34:49.058 | 1.00th=[ 8455], 5.00th=[ 8979], 10.00th=[ 9241], 20.00th=[ 9634], 00:34:49.058 | 30.00th=[ 9896], 40.00th=[10159], 50.00th=[10290], 60.00th=[10421], 00:34:49.058 | 70.00th=[10683], 80.00th=[11076], 90.00th=[11469], 95.00th=[11863], 00:34:49.058 | 99.00th=[12518], 99.50th=[12911], 99.90th=[51119], 99.95th=[51119], 00:34:49.058 | 99.99th=[51119] 00:34:49.058 bw ( KiB/s): min=33536, max=39424, per=33.25%, avg=36890.95, stdev=1545.20, samples=19 00:34:49.058 iops : min= 262, max= 308, avg=288.21, stdev=12.07, samples=19 00:34:49.058 lat (msec) : 10=35.28%, 20=64.61%, 100=0.10% 00:34:49.058 cpu : usr=88.45%, sys=7.55%, ctx=1971, majf=0, minf=38 00:34:49.058 IO depths : 1=0.2%, 2=99.8%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:34:49.058 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.058 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:34:49.058 issued rwts: total=2891,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:34:49.058 latency : target=0, window=0, percentile=100.00%, depth=3 00:34:49.058 00:34:49.058 Run status group 0 (all jobs): 00:34:49.058 READ: bw=108MiB/s (114MB/s), 34.2MiB/s-38.2MiB/s (35.9MB/s-40.0MB/s), io=1088MiB (1141MB), run=10004-10046msec 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- 
target/dif.sh@46 -- # destroy_subsystem 0 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@563 -- # xtrace_disable 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:34:49.058 00:34:49.058 real 0m11.395s 00:34:49.058 user 0m34.940s 00:34:49.058 sys 0m1.917s 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:49.058 10:52:28 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:34:49.058 ************************************ 00:34:49.058 END TEST fio_dif_digest 00:34:49.058 ************************************ 00:34:49.058 10:52:28 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:34:49.058 10:52:28 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@335 -- # nvmfcleanup 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@99 -- # sync 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@102 -- # set +e 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@103 -- # for i in {1..20} 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:34:49.058 rmmod nvme_tcp 00:34:49.058 rmmod 
nvme_fabrics 00:34:49.058 rmmod nvme_keyring 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@106 -- # set -e 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@107 -- # return 0 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@336 -- # '[' -n 3480157 ']' 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@337 -- # killprocess 3480157 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@954 -- # '[' -z 3480157 ']' 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@958 -- # kill -0 3480157 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@959 -- # uname 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3480157 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3480157' 00:34:49.058 killing process with pid 3480157 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@973 -- # kill 3480157 00:34:49.058 10:52:28 nvmf_dif -- common/autotest_common.sh@978 -- # wait 3480157 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:34:49.058 10:52:28 nvmf_dif -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:34:50.963 Waiting for block devices as requested 00:34:50.963 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:34:51.222 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:51.222 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:51.222 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:51.222 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:51.481 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:51.481 
0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:51.481 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:51.740 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:51.740 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:34:51.740 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:34:51.999 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:34:51.999 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:34:51.999 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:34:51.999 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:34:52.258 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:34:52.258 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:34:52.258 10:52:32 nvmf_dif -- nvmf/common.sh@342 -- # nvmf_fini 00:34:52.258 10:52:32 nvmf_dif -- nvmf/setup.sh@264 -- # local dev 00:34:52.258 10:52:32 nvmf_dif -- nvmf/setup.sh@267 -- # remove_target_ns 00:34:52.258 10:52:32 nvmf_dif -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:52.258 10:52:32 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:34:52.258 10:52:32 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@268 -- # delete_main_bridge 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@130 -- # return 0 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@224 -- # 
ip addr flush dev cvl_0_0 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@41 -- # _dev=0 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@41 -- # dev_map=() 00:34:54.795 10:52:35 nvmf_dif -- nvmf/setup.sh@284 -- # iptr 00:34:54.795 10:52:35 nvmf_dif -- nvmf/common.sh@542 -- # iptables-save 00:34:54.795 10:52:35 nvmf_dif -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:34:54.795 10:52:35 nvmf_dif -- nvmf/common.sh@542 -- # iptables-restore 00:34:54.795 00:34:54.795 real 1m14.355s 00:34:54.795 user 7m9.134s 00:34:54.795 sys 0m21.012s 00:34:54.795 10:52:35 nvmf_dif -- common/autotest_common.sh@1130 -- # xtrace_disable 00:34:54.795 10:52:35 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:34:54.795 ************************************ 00:34:54.795 END TEST nvmf_dif 00:34:54.795 ************************************ 00:34:54.795 10:52:35 -- spdk/autotest.sh@290 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:54.795 10:52:35 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:34:54.795 10:52:35 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:34:54.795 10:52:35 -- common/autotest_common.sh@10 -- # set +x 00:34:54.795 ************************************ 
00:34:54.795 START TEST nvmf_abort_qd_sizes 00:34:54.795 ************************************ 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:34:54.795 * Looking for test storage... 00:34:54.795 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lcov --version 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@333 -- # local ver1 ver1_l 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@334 -- # local ver2 ver2_l 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # IFS=.-: 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@336 -- # read -ra ver1 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # IFS=.-: 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@337 -- # read -ra ver2 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@338 -- # local 'op=<' 00:34:54.795 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@340 -- # ver1_l=2 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@341 -- # ver2_l=1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@344 -- # case "$op" in 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@345 -- # : 1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v = 0 )) 00:34:54.796 
10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # decimal 1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@365 -- # ver1[v]=1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # decimal 2 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@353 -- # local d=2 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@355 -- # echo 2 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@366 -- # ver2[v]=2 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@368 -- # return 0 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:34:54.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.796 --rc genhtml_branch_coverage=1 00:34:54.796 --rc genhtml_function_coverage=1 00:34:54.796 --rc genhtml_legend=1 00:34:54.796 --rc geninfo_all_blocks=1 00:34:54.796 --rc geninfo_unexecuted_blocks=1 00:34:54.796 00:34:54.796 ' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:34:54.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.796 --rc genhtml_branch_coverage=1 00:34:54.796 --rc 
genhtml_function_coverage=1 00:34:54.796 --rc genhtml_legend=1 00:34:54.796 --rc geninfo_all_blocks=1 00:34:54.796 --rc geninfo_unexecuted_blocks=1 00:34:54.796 00:34:54.796 ' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:34:54.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.796 --rc genhtml_branch_coverage=1 00:34:54.796 --rc genhtml_function_coverage=1 00:34:54.796 --rc genhtml_legend=1 00:34:54.796 --rc geninfo_all_blocks=1 00:34:54.796 --rc geninfo_unexecuted_blocks=1 00:34:54.796 00:34:54.796 ' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:34:54.796 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:34:54.796 --rc genhtml_branch_coverage=1 00:34:54.796 --rc genhtml_function_coverage=1 00:34:54.796 --rc genhtml_legend=1 00:34:54.796 --rc geninfo_all_blocks=1 00:34:54.796 --rc geninfo_unexecuted_blocks=1 00:34:54.796 00:34:54.796 ' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- 
nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@15 -- # shopt -s extglob 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@50 -- # : 0 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # export 
NVMF_APP_SHM_ID 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:34:54.796 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@54 -- # have_pci_nics=0 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # '[' -z tcp ']' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@294 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # prepare_net_devs 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # local -g is_hw=no 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # remove_target_ns 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # [[ phy != virt ]] 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # gather_supported_nvmf_pci_devs 00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # xtrace_disable 
00:34:54.796 10:52:35 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@129 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # pci_devs=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@131 -- # local -a pci_devs 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # pci_net_devs=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@132 -- # local -a pci_net_devs 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # pci_drivers=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@133 -- # local -A pci_drivers 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # net_devs=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@135 -- # local -ga net_devs 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # e810=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@136 -- # local -ga e810 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # x722=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@137 -- # local -ga x722 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # mlx=() 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@138 -- # local -ga mlx 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@141 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@142 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@144 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@146 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@148 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@150 -- # 
mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@152 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@154 -- # mlx+=(${pci_bus_cache["$mellanox:0x101b"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@156 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@157 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@159 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@160 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@162 -- # pci_devs+=("${e810[@]}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@163 -- # [[ tcp == rdma ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@169 -- # [[ e810 == mlx5 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@171 -- # [[ e810 == e810 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@172 -- # pci_devs=("${e810[@]}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@177 -- # (( 2 == 0 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:35:01.366 Found 0000:86:00.0 (0x8086 - 0x159b) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:01.366 10:52:40 
nvmf_abort_qd_sizes -- nvmf/common.sh@182 -- # for pci in "${pci_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@183 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:35:01.366 Found 0000:86:00.1 (0x8086 - 0x159b) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@184 -- # [[ ice == unknown ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@188 -- # [[ ice == unbound ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@192 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@193 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@194 -- # [[ tcp == rdma ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@208 -- # (( 0 > 0 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ e810 == e810 ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@214 -- # [[ tcp == rdma ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci in "${pci_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:35:01.366 Found net devices under 0000:86:00.0: cvl_0_0 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@226 -- # for pci 
in "${pci_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@227 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@232 -- # [[ tcp == tcp ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@233 -- # for net_dev in "${!pci_net_devs[@]}" 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # [[ up == up ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@238 -- # (( 1 == 0 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:35:01.366 Found net devices under 0000:86:00.1: cvl_0_1 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # net_devs+=("${pci_net_devs[@]}") 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # (( 2 == 0 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@252 -- # [[ tcp == rdma ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@262 -- # is_hw=yes 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # [[ yes == yes ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@265 -- # [[ tcp == tcp ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/common.sh@266 -- # nvmf_tcp_init 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@255 -- # local total_initiator_target_pairs=1 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@257 -- # create_target_ns 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@142 -- # local ns=nvmf_ns_spdk 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@144 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@145 -- # ip netns add nvmf_ns_spdk 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@146 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 
00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@148 -- # set_up lo NVMF_TARGET_NS_CMD 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=lo in_ns=NVMF_TARGET_NS_CMD 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set lo up' 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set lo up 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@258 -- # setup_interfaces 1 phy 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@25 -- # local no=1 type=phy transport=tcp ip_pool=0x0a000001 max 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@27 -- # local -gA dev_map 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@28 -- # local -g _dev 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@31 -- # (( ip_pool += _dev * 2, (_dev + no) * 2 <= 255 )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev = _dev, max = _dev )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@34 -- # setup_interface_pair 0 phy 167772161 tcp 00:35:01.366 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # ips=() 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@44 -- # local id=0 type=phy ip=167772161 transport=tcp ips 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@45 -- # local initiator=initiator0 target=target0 _ns= 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@46 -- # local key_initiator=initiator0 key_target=target0 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@49 -- # ips=("$ip" $((++ip))) 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- 
# [[ tcp == tcp ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@51 -- # _ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@53 -- # [[ tcp == rdma ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@61 -- # [[ phy == phy ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # initiator=cvl_0_0 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@64 -- # target=cvl_0_1 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@67 -- # [[ phy == veth ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@68 -- # [[ phy == veth ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # [[ tcp == tcp ]] 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@70 -- # add_to_ns cvl_0_1 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@152 -- # local dev=cvl_0_1 ns=nvmf_ns_spdk 00:35:01.367 10:52:40 nvmf_abort_qd_sizes -- nvmf/setup.sh@153 -- # ip link set cvl_0_1 netns nvmf_ns_spdk 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@72 -- # set_ip cvl_0_0 167772161 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_0 ip=167772161 in_ns= 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n '' ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772161 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772161 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval ' ip addr add 10.0.0.1/24 dev cvl_0_0' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip addr add 10.0.0.1/24 dev cvl_0_0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.1 | tee /sys/class/net/cvl_0_0/ifalias' 00:35:01.367 10:52:41 
nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # tee /sys/class/net/cvl_0_0/ifalias 00:35:01.367 10.0.0.1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@73 -- # set_ip cvl_0_1 167772162 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@204 -- # local dev=cvl_0_1 ip=167772162 in_ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@205 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # val_to_ip 167772162 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@11 -- # local val=167772162 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@13 -- # printf '%u.%u.%u.%u\n' 10 0 0 2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@207 -- # ip=10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # eval 'ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@208 -- # ip netns exec nvmf_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # eval 'echo 10.0.0.2 | ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # echo 10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@210 -- # ip netns exec nvmf_ns_spdk tee /sys/class/net/cvl_0_1/ifalias 00:35:01.367 10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@75 -- # set_up cvl_0_0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_0 in_ns= 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n '' ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval ' ip link set cvl_0_0 up' 00:35:01.367 
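The trace above shows `val_to_ip` turning the pool value 167772161 into `10.0.0.1` via `printf '%u.%u.%u.%u'`. A minimal standalone sketch of that conversion (an assumed reimplementation for illustration, not the exact SPDK `nvmf/setup.sh` source) unpacks the 32-bit integer one octet at a time:

```shell
# Hedged sketch of a val_to_ip-style helper: split a 32-bit integer
# into its four octets and print dotted-quad notation.
val_to_ip() {
  local val=$1
  printf '%u.%u.%u.%u\n' \
    $(( (val >> 24) & 0xff )) \
    $(( (val >> 16) & 0xff )) \
    $(( (val >> 8)  & 0xff )) \
    $((  val        & 0xff ))
}

val_to_ip 167772161   # 0x0A000001 -> 10.0.0.1
val_to_ip 167772162   # 0x0A000002 -> 10.0.0.2
```

This also explains the `ip_pool=0x0a000001` starting value and the `ip_pool += 2` stride seen earlier: each initiator/target pair consumes two consecutive addresses in the 10.0.0.0/24 pool.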
10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip link set cvl_0_0 up 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@76 -- # set_up cvl_0_1 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@214 -- # local dev=cvl_0_1 in_ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@215 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # eval 'ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@217 -- # ip netns exec nvmf_ns_spdk ip link set cvl_0_1 up 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@78 -- # [[ phy == veth ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@79 -- # [[ phy == veth ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@81 -- # [[ tcp == tcp ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@82 -- # ipts -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@541 -- # iptables -I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT -m comment --comment 'SPDK_NVMF:-I INPUT 1 -i cvl_0_0 -p tcp --dport 4420 -j ACCEPT' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_initiator"]=cvl_0_0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@85 -- # dev_map["$key_target"]=cvl_0_1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev++, ip_pool += 2 )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@33 -- # (( _dev < max + no )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@38 -- # ping_ips 1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@96 -- # local pairs=1 pair 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair = 0 )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # get_tcp_initiator_ip_address 0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@99 -- # ping_ip 10.0.0.1 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.1 in_ns=NVMF_TARGET_NS_CMD count=1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval 'ip netns exec nvmf_ns_spdk 
ping -c 1 10.0.0.1' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ip netns exec nvmf_ns_spdk ping -c 1 10.0.0.1 00:35:01.367 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:35:01.367 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.491 ms 00:35:01.367 00:35:01.367 --- 10.0.0.1 ping statistics --- 00:35:01.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.367 rtt min/avg/max/mdev = 0.491/0.491/0.491/0.000 ms 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # get_tcp_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 0 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- 
nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@100 -- # ping_ip 10.0.0.2 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@89 -- # local ip=10.0.0.2 in_ns= count=1 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@90 -- # [[ -n '' ]] 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # eval ' ping -c 1 10.0.0.2' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@92 -- # ping -c 1 10.0.0.2 00:35:01.367 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:35:01.367 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:35:01.367 00:35:01.367 --- 10.0.0.2 ping statistics --- 00:35:01.367 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:35:01.367 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair++ )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@98 -- # (( pair < pairs )) 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/setup.sh@260 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # return 0 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # '[' iso == iso ']' 00:35:01.367 10:52:41 nvmf_abort_qd_sizes -- nvmf/common.sh@299 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:03.273 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:03.273 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.7 (8086 2021): 
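The setup sequence traced above (create namespace, move the target NIC into it, assign the address pair, bring links up, open TCP port 4420) can be condensed into the following hedged recap. The device names `cvl_0_0`/`cvl_0_1` and the `nvmf_ns_spdk` namespace come from this log; the function only *prints* the commands, since actually running them requires root and the physical NICs:

```shell
# Generate (do not execute) the interface-pair setup commands the log performs.
# This is a condensed sketch of the observed sequence, not the SPDK script itself.
gen_setup_cmds() {
  local ns=$1 init_dev=$2 tgt_dev=$3 init_ip=$4 tgt_ip=$5
  cat <<EOF
ip netns add $ns
ip link set $tgt_dev netns $ns
ip addr add $init_ip/24 dev $init_dev
ip netns exec $ns ip addr add $tgt_ip/24 dev $tgt_dev
ip link set $init_dev up
ip netns exec $ns ip link set $tgt_dev up
iptables -I INPUT 1 -i $init_dev -p tcp --dport 4420 -j ACCEPT
EOF
}

gen_setup_cmds nvmf_ns_spdk cvl_0_0 cvl_0_1 10.0.0.1 10.0.0.2
```

The bidirectional ping (initiator IP from inside the namespace, target IP from the host) then verifies that both ends of the pair are reachable before the NVMe-oF target starts.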
ioatdma -> vfio-pci 00:35:03.533 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:03.533 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:04.911 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # nvmf_legacy_env 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@331 -- # NVMF_TARGET_INTERFACE=cvl_0_1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@332 -- # NVMF_TARGET_INTERFACE2= 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # get_tcp_initiator_ip_address 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address '' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator0 in_ns= ip 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator0 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_0 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval ' cat /sys/class/net/cvl_0_0/ifalias' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # cat /sys/class/net/cvl_0_0/ifalias 00:35:04.911 10:52:45 nvmf_abort_qd_sizes 
-- nvmf/setup.sh@172 -- # ip=10.0.0.1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.1 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@334 -- # NVMF_FIRST_INITIATOR_IP=10.0.0.1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # get_tcp_initiator_ip_address 1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@187 -- # get_initiator_ip_address 1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@183 -- # get_ip_address initiator1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=initiator1 in_ns= ip 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n '' ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev initiator1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=initiator1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n initiator1 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@335 -- # NVMF_SECOND_INITIATOR_IP= 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # get_tcp_target_ip_address 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address '' NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target0 NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target0 in_ns=NVMF_TARGET_NS_CMD ip 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.911 10:52:45 
nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target0 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n cvl_0_1 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@110 -- # echo cvl_0_1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev=cvl_0_1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # eval 'ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip netns exec nvmf_ns_spdk cat /sys/class/net/cvl_0_1/ifalias 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@172 -- # ip=10.0.0.2 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@173 -- # [[ -n 10.0.0.2 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@175 -- # echo 10.0.0.2 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@337 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # get_tcp_target_ip_address 1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@196 -- # get_target_ip_address 1 NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@179 -- # get_ip_address target1 NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@165 -- # local dev=target1 in_ns=NVMF_TARGET_NS_CMD ip 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # [[ -n NVMF_TARGET_NS_CMD ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@166 -- # local -n ns=NVMF_TARGET_NS_CMD 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # get_net_dev target1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@107 -- # local dev=target1 
00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n target1 ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # [[ -n '' ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@109 -- # return 1 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@168 -- # dev= 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@169 -- # return 0 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/setup.sh@338 -- # NVMF_SECOND_TARGET_IP= 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@309 -- # [[ tcp == \r\d\m\a ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@311 -- # [[ tcp == \t\c\p ]] 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # '[' tcp == tcp ']' 00:35:04.911 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # modprobe nvme-tcp 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@326 -- # timing_enter start_nvmf_tgt 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@328 -- # nvmfpid=3496648 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # ip netns exec nvmf_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # waitforlisten 3496648 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@835 -- # '[' -z 3496648 ']' 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 
00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:05.171 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:05.171 10:52:45 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:05.171 [2024-11-20 10:52:45.735288] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:35:05.171 [2024-11-20 10:52:45.735335] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:35:05.171 [2024-11-20 10:52:45.811297] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:35:05.171 [2024-11-20 10:52:45.854261] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:35:05.171 [2024-11-20 10:52:45.854298] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:35:05.171 [2024-11-20 10:52:45.854305] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:35:05.171 [2024-11-20 10:52:45.854311] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:35:05.171 [2024-11-20 10:52:45.854316] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
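The `waitforlisten` step above blocks until `nvmf_tgt` is up and listening on `/var/tmp/spdk.sock`, with a `max_retries=100` cap. A generic, hedged sketch of that poll-with-retry pattern (not the actual `autotest_common.sh` implementation, which also checks the PID and uses an RPC probe):

```shell
# Poll until a filesystem path appears, giving up after a bounded number of
# retries -- the same shape as waitforlisten's max_retries=100 loop.
wait_for_path() {
  local path=$1 retries=${2:-100}
  while (( retries-- > 0 )); do
    [ -e "$path" ] && return 0
    sleep 0.1
  done
  return 1
}
```

Capping the retries matters in CI: if the target crashes during startup, the test fails quickly with a clear timeout instead of hanging the pipeline.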
00:35:05.171 [2024-11-20 10:52:45.855918] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:05.171 [2024-11-20 10:52:45.856024] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 2 00:35:05.171 [2024-11-20 10:52:45.856133] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:05.171 [2024-11-20 10:52:45.856134] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 3 00:35:06.105 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:06.105 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@868 -- # return 0 00:35:06.105 10:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # timing_exit start_nvmf_tgt 00:35:06.105 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:06.105 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- nvmf/common.sh@331 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # local bdf bdfs 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # local nvmes 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@315 -- # [[ -n 0000:5e:00.0 ]] 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@316 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 
00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # uname -s 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@328 -- # (( 1 )) 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- scripts/common.sh@329 -- # printf '%s\n' 0000:5e:00.0 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:06.106 10:52:46 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:06.106 ************************************ 00:35:06.106 START TEST spdk_target_abort 00:35:06.106 ************************************ 00:35:06.106 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1129 -- # spdk_target 00:35:06.106 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:35:06.106 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:35:06.106 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:06.106 10:52:46 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.393 spdk_targetn1 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.393 [2024-11-20 10:52:49.497335] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:09.393 [2024-11-20 10:52:49.534882] tcp.c:1081:nvmf_tcp_listen: 
*NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:09.393 10:52:49 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:12.681 Initializing NVMe Controllers 00:35:12.681 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:12.681 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:12.681 Initialization complete. Launching workers. 
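The xtrace lines above show `rabort` growing the `target` string one field at a time across the `for r in trtype adrfam traddr trsvcid subnqn` loop. A minimal standalone sketch of that assembly (values copied from this log; this is not the full test script, just the string-building step) looks like:

```shell
#!/usr/bin/env bash
# Sketch of how rabort() in abort_qd_sizes.sh builds the -r connection
# string handed to the abort example. Field values are the ones seen in
# the log above.
trtype=tcp
adrfam=IPv4
traddr=10.0.0.2
trsvcid=4420
subnqn=nqn.2016-06.io.spdk:testnqn

target=""
for r in trtype adrfam traddr trsvcid subnqn; do
    # ${!r} is bash indirect expansion: the value of the variable whose
    # name is stored in $r. ${target:+$target } prepends a space only
    # once target is non-empty, matching the log's incremental output.
    target="${target:+$target }$r:${!r}"
done
echo "$target"
```

Each iteration reproduces one of the `target='trtype:tcp ...'` lines in the trace, ending with the full `-r` argument.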
00:35:12.681 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 16567, failed: 0 00:35:12.681 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1318, failed to submit 15249 00:35:12.681 success 747, unsuccessful 571, failed 0 00:35:12.681 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:12.681 10:52:52 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:15.970 Initializing NVMe Controllers 00:35:15.970 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:15.970 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:15.970 Initialization complete. Launching workers. 00:35:15.970 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8662, failed: 0 00:35:15.970 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1234, failed to submit 7428 00:35:15.970 success 297, unsuccessful 937, failed 0 00:35:15.970 10:52:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:15.970 10:52:56 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:19.258 Initializing NVMe Controllers 00:35:19.258 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:35:19.258 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:19.258 Initialization complete. Launching workers. 
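The three abort runs in this log come from the `for qd in "${qds[@]}"` loop with `qds=(4 24 64)`: the same example binary is invoked once per queue depth with an identical workload. A sketch of that sweep, assuming `ABORT_BIN` stands in for the `build/examples/abort` path used in this workspace (the `echo` is a dry run; the real script executes the binary directly):

```shell
#!/usr/bin/env bash
# Sketch of the queue-depth sweep driven by abort_qd_sizes.sh.
# ABORT_BIN is a placeholder for the abort example path seen in the log.
ABORT_BIN=${ABORT_BIN:-./build/examples/abort}
TARGET='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'

qds=(4 24 64)
for qd in "${qds[@]}"; do
    # -q: queue depth under test, -w rw -M 50: 50/50 read/write mix,
    # -o 4096: 4 KiB I/O size, -r: transport connection string.
    echo "$ABORT_BIN" -q "$qd" -w rw -M 50 -o 4096 -r "$TARGET"
done
```

Only `-q` varies between runs, which is why the per-run summaries above (I/O completed, aborts submitted, success/unsuccessful counts) can be compared directly across queue depths.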
00:35:19.258 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 39085, failed: 0 00:35:19.258 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2843, failed to submit 36242 00:35:19.258 success 571, unsuccessful 2272, failed 0 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:19.258 10:52:59 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 3496648 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # '[' -z 3496648 ']' 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # kill -0 3496648 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # uname 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3496648 00:35:20.637 10:53:01 
nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3496648' 00:35:20.637 killing process with pid 3496648 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@973 -- # kill 3496648 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@978 -- # wait 3496648 00:35:20.637 00:35:20.637 real 0m14.685s 00:35:20.637 user 0m58.526s 00:35:20.637 sys 0m2.622s 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:20.637 10:53:01 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:20.637 ************************************ 00:35:20.637 END TEST spdk_target_abort 00:35:20.637 ************************************ 00:35:20.896 10:53:01 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:35:20.896 10:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:20.896 10:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:20.896 10:53:01 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:20.896 ************************************ 00:35:20.896 START TEST kernel_target_abort 00:35:20.896 ************************************ 00:35:20.896 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1129 -- # kernel_target 00:35:20.896 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@434 -- # local 
kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@436 -- # nvmet=/sys/kernel/config/nvmet 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@437 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@438 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@439 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@441 -- # local block nvme 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@443 -- # [[ ! -e /sys/module/nvmet ]] 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@444 -- # modprobe nvmet 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@447 -- # [[ -e /sys/kernel/config/nvmet ]] 00:35:20.897 10:53:01 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@449 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:23.435 Waiting for block devices as requested 00:35:23.435 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:23.694 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:23.694 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:23.954 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:23.954 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:23.954 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:23.954 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:24.214 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:24.214 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:24.214 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:24.473 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:24.473 
0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:24.473 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:24.473 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:24.732 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:24.732 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:24.732 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@452 -- # for block in /sys/block/nvme* 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@453 -- # [[ -e /sys/block/nvme0n1 ]] 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@454 -- # is_block_zoned nvme0n1 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # block_in_use nvme0n1 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@381 -- # local block=nvme0n1 pt 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@390 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:35:24.991 No valid GPT data, bailing 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@394 -- # pt= 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@395 -- # return 1 00:35:24.991 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@455 -- # nvme=/dev/nvme0n1 00:35:24.992 10:53:05 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@458 -- # [[ -b /dev/nvme0n1 ]] 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@460 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@461 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@462 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@467 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@469 -- # echo 1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@470 -- # echo /dev/nvme0n1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@471 -- # echo 1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@473 -- # echo 10.0.0.1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@474 -- # echo tcp 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@475 -- # echo 4420 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@476 -- # echo ipv4 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@479 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@482 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 --hostid=00ad29c2-ccbd-e911-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:35:24.992 00:35:24.992 Discovery Log Number of Records 2, Generation counter 2 00:35:24.992 =====Discovery Log Entry 0====== 00:35:24.992 trtype: tcp 00:35:24.992 
adrfam: ipv4 00:35:24.992 subtype: current discovery subsystem 00:35:24.992 treq: not specified, sq flow control disable supported 00:35:24.992 portid: 1 00:35:24.992 trsvcid: 4420 00:35:24.992 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:35:24.992 traddr: 10.0.0.1 00:35:24.992 eflags: none 00:35:24.992 sectype: none 00:35:24.992 =====Discovery Log Entry 1====== 00:35:24.992 trtype: tcp 00:35:24.992 adrfam: ipv4 00:35:24.992 subtype: nvme subsystem 00:35:24.992 treq: not specified, sq flow control disable supported 00:35:24.992 portid: 1 00:35:24.992 trsvcid: 4420 00:35:24.992 subnqn: nqn.2016-06.io.spdk:testnqn 00:35:24.992 traddr: 10.0.0.1 00:35:24.992 eflags: none 00:35:24.992 sectype: none 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.992 10:53:05 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:24.992 10:53:05 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:28.319 Initializing NVMe Controllers 00:35:28.319 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:28.319 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:28.319 Initialization complete. 
Launching workers. 00:35:28.319 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 95587, failed: 0 00:35:28.319 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 95587, failed to submit 0 00:35:28.319 success 0, unsuccessful 95587, failed 0 00:35:28.319 10:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:28.319 10:53:08 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:31.691 Initializing NVMe Controllers 00:35:31.691 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:31.691 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:31.691 Initialization complete. Launching workers. 00:35:31.691 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 148072, failed: 0 00:35:31.691 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 37390, failed to submit 110682 00:35:31.691 success 0, unsuccessful 37390, failed 0 00:35:31.691 10:53:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:35:31.691 10:53:11 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:35:34.979 Initializing NVMe Controllers 00:35:34.979 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:35:34.979 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:35:34.979 Initialization complete. 
Launching workers. 00:35:34.979 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 141492, failed: 0 00:35:34.979 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 35414, failed to submit 106078 00:35:34.979 success 0, unsuccessful 35414, failed 0 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@486 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@488 -- # echo 0 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@490 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@491 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@492 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@493 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@495 -- # modules=(/sys/module/nvmet/holders/*) 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@497 -- # modprobe -r nvmet_tcp nvmet 00:35:34.979 10:53:15 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@500 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:35:37.515 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.3 (8086 2021): ioatdma -> 
vfio-pci 00:35:37.515 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:35:37.515 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:35:38.894 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:35:38.894 00:35:38.894 real 0m18.062s 00:35:38.894 user 0m9.133s 00:35:38.894 sys 0m5.110s 00:35:38.894 10:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.894 10:53:19 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:35:38.894 ************************************ 00:35:38.894 END TEST kernel_target_abort 00:35:38.894 ************************************ 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # nvmfcleanup 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@99 -- # sync 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@101 -- # '[' tcp == tcp ']' 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@102 -- # set +e 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@103 -- # for i in {1..20} 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@104 -- # modprobe -v -r nvme-tcp 00:35:38.894 rmmod nvme_tcp 00:35:38.894 rmmod nvme_fabrics 00:35:38.894 rmmod nvme_keyring 00:35:38.894 10:53:19 nvmf_abort_qd_sizes 
-- nvmf/common.sh@105 -- # modprobe -v -r nvme-fabrics 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@106 -- # set -e 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@107 -- # return 0 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@336 -- # '[' -n 3496648 ']' 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@337 -- # killprocess 3496648 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@954 -- # '[' -z 3496648 ']' 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@958 -- # kill -0 3496648 00:35:38.894 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 958: kill: (3496648) - No such process 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- common/autotest_common.sh@981 -- # echo 'Process with pid 3496648 is not found' 00:35:38.894 Process with pid 3496648 is not found 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@339 -- # '[' iso == iso ']' 00:35:38.894 10:53:19 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:35:42.182 Waiting for block devices as requested 00:35:42.182 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:35:42.182 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:42.182 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:42.182 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:42.182 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:42.182 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:35:42.182 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:42.440 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:42.440 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:42.440 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:35:42.440 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:35:42.698 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:35:42.698 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:35:42.698 0000:80:04.3 (8086 2021): vfio-pci -> 
ioatdma 00:35:42.957 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:35:42.957 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:35:42.957 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # nvmf_fini 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@264 -- # local dev 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@267 -- # remove_target_ns 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- nvmf/setup.sh@323 -- # xtrace_disable_per_cmd _remove_target_ns 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_target_ns 13> /dev/null' 00:35:43.215 10:53:23 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_target_ns 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@268 -- # delete_main_bridge 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # [[ -e /sys/class/net/nvmf_br/address ]] 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@130 -- # return 0 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_0/address ]] 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_0 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_0 in_ns= 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_0' 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_0 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@270 -- # for dev in "${dev_map[@]}" 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@271 -- # [[ -e /sys/class/net/cvl_0_1/address ]] 00:35:45.120 10:53:25 
nvmf_abort_qd_sizes -- nvmf/setup.sh@275 -- # (( 4 == 3 )) 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@279 -- # flush_ip cvl_0_1 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@221 -- # local dev=cvl_0_1 in_ns= 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@222 -- # [[ -n '' ]] 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # eval ' ip addr flush dev cvl_0_1' 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@224 -- # ip addr flush dev cvl_0_1 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@283 -- # reset_setup_interfaces 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # _dev=0 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@41 -- # dev_map=() 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/setup.sh@284 -- # iptr 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-save 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # grep -v SPDK_NVMF 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- nvmf/common.sh@542 -- # iptables-restore 00:35:45.120 00:35:45.120 real 0m50.645s 00:35:45.120 user 1m12.273s 00:35:45.120 sys 0m16.523s 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:45.120 10:53:25 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:35:45.120 ************************************ 00:35:45.120 END TEST nvmf_abort_qd_sizes 00:35:45.120 ************************************ 00:35:45.120 10:53:25 -- spdk/autotest.sh@292 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:45.120 10:53:25 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:35:45.120 10:53:25 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:45.120 10:53:25 -- common/autotest_common.sh@10 -- # set +x 00:35:45.120 ************************************ 00:35:45.120 START TEST keyring_file 00:35:45.120 
************************************ 00:35:45.120 10:53:25 keyring_file -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:35:45.380 * Looking for test storage... 00:35:45.380 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:45.380 10:53:25 keyring_file -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:45.380 10:53:25 keyring_file -- common/autotest_common.sh@1693 -- # lcov --version 00:35:45.380 10:53:25 keyring_file -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:45.380 10:53:25 keyring_file -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@336 -- # IFS=.-: 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@336 -- # read -ra ver1 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@337 -- # IFS=.-: 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@337 -- # read -ra ver2 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@338 -- # local 'op=<' 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@340 -- # ver1_l=2 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@341 -- # ver2_l=1 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@344 -- # case "$op" in 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@345 -- # : 1 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:45.380 10:53:25 keyring_file -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@365 -- # decimal 1 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@353 -- # local d=1 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@355 -- # echo 1 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@365 -- # ver1[v]=1 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@366 -- # decimal 2 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@353 -- # local d=2 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@355 -- # echo 2 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@366 -- # ver2[v]=2 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@368 -- # return 0 00:35:45.380 10:53:26 keyring_file -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:45.380 10:53:26 keyring_file -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.380 --rc genhtml_branch_coverage=1 00:35:45.380 --rc genhtml_function_coverage=1 00:35:45.380 --rc genhtml_legend=1 00:35:45.380 --rc geninfo_all_blocks=1 00:35:45.380 --rc geninfo_unexecuted_blocks=1 00:35:45.380 00:35:45.380 ' 00:35:45.380 10:53:26 keyring_file -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.380 --rc genhtml_branch_coverage=1 00:35:45.380 --rc genhtml_function_coverage=1 00:35:45.380 --rc genhtml_legend=1 00:35:45.380 --rc geninfo_all_blocks=1 00:35:45.380 --rc geninfo_unexecuted_blocks=1 00:35:45.380 00:35:45.380 ' 00:35:45.380 
10:53:26 keyring_file -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.380 --rc genhtml_branch_coverage=1 00:35:45.380 --rc genhtml_function_coverage=1 00:35:45.380 --rc genhtml_legend=1 00:35:45.380 --rc geninfo_all_blocks=1 00:35:45.380 --rc geninfo_unexecuted_blocks=1 00:35:45.380 00:35:45.380 ' 00:35:45.380 10:53:26 keyring_file -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:45.380 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:45.380 --rc genhtml_branch_coverage=1 00:35:45.380 --rc genhtml_function_coverage=1 00:35:45.380 --rc genhtml_legend=1 00:35:45.380 --rc geninfo_all_blocks=1 00:35:45.380 --rc geninfo_unexecuted_blocks=1 00:35:45.380 00:35:45.380 ' 00:35:45.380 10:53:26 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:45.380 10:53:26 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@16 
-- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@15 -- # shopt -s extglob 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:45.380 10:53:26 keyring_file -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:45.380 10:53:26 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.380 10:53:26 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.380 10:53:26 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.380 10:53:26 keyring_file -- paths/export.sh@5 -- # export PATH 00:35:45.380 10:53:26 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:45.380 10:53:26 keyring_file -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:45.380 10:53:26 keyring_file -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:45.380 10:53:26 keyring_file -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@50 -- # : 0 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:45.380 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:45.380 10:53:26 keyring_file -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:45.380 10:53:26 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:45.380 10:53:26 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:45.380 10:53:26 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vENK3DyMdv 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:45.381 10:53:26 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vENK3DyMdv 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vENK3DyMdv 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.vENK3DyMdv 00:35:45.381 10:53:26 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # name=key1 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:45.381 10:53:26 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:45.640 10:53:26 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:45.640 10:53:26 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.SDoaZaCYaQ 00:35:45.640 10:53:26 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@504 -- # local prefix key digest 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:45.640 10:53:26 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:45.640 10:53:26 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.SDoaZaCYaQ 00:35:45.640 10:53:26 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.SDoaZaCYaQ 00:35:45.640 10:53:26 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.SDoaZaCYaQ 
00:35:45.640 10:53:26 keyring_file -- keyring/file.sh@30 -- # tgtpid=3505633 00:35:45.640 10:53:26 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:45.640 10:53:26 keyring_file -- keyring/file.sh@32 -- # waitforlisten 3505633 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3505633 ']' 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:45.640 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.640 10:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.640 [2024-11-20 10:53:26.205129] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:35:45.640 [2024-11-20 10:53:26.205182] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505633 ] 00:35:45.640 [2024-11-20 10:53:26.279885] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:45.640 [2024-11-20 10:53:26.321632] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:45.899 10:53:26 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.899 [2024-11-20 10:53:26.533405] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:45.899 null0 00:35:45.899 [2024-11-20 10:53:26.565451] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:45.899 [2024-11-20 10:53:26.565784] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:45.899 10:53:26 keyring_file -- keyring/file.sh@44 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 
00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@655 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:45.899 [2024-11-20 10:53:26.597530] nvmf_rpc.c: 762:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:35:45.899 request: 00:35:45.899 { 00:35:45.899 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:35:45.899 "secure_channel": false, 00:35:45.899 "listen_address": { 00:35:45.899 "trtype": "tcp", 00:35:45.899 "traddr": "127.0.0.1", 00:35:45.899 "trsvcid": "4420" 00:35:45.899 }, 00:35:45.899 "method": "nvmf_subsystem_add_listener", 00:35:45.899 "req_id": 1 00:35:45.899 } 00:35:45.899 Got JSON-RPC error response 00:35:45.899 response: 00:35:45.899 { 00:35:45.899 "code": -32602, 00:35:45.899 "message": "Invalid parameters" 00:35:45.899 } 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:45.899 10:53:26 keyring_file -- keyring/file.sh@47 -- # bperfpid=3505640 00:35:45.899 10:53:26 keyring_file -- keyring/file.sh@49 -- # waitforlisten 3505640 /var/tmp/bperf.sock 00:35:45.899 10:53:26 keyring_file -- keyring/file.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:35:45.899 10:53:26 
keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3505640 ']' 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:45.899 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:45.899 10:53:26 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:46.158 [2024-11-20 10:53:26.653940] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 00:35:46.158 [2024-11-20 10:53:26.653983] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3505640 ] 00:35:46.158 [2024-11-20 10:53:26.729289] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:46.158 [2024-11-20 10:53:26.771750] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:46.158 10:53:26 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:46.158 10:53:26 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:46.158 10:53:26 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:46.158 10:53:26 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:46.417 10:53:27 keyring_file -- keyring/file.sh@51 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SDoaZaCYaQ 00:35:46.417 10:53:27 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SDoaZaCYaQ 00:35:46.676 10:53:27 keyring_file -- keyring/file.sh@52 -- # get_key key0 00:35:46.676 10:53:27 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:35:46.676 10:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.676 10:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.676 10:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.934 10:53:27 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vENK3DyMdv == \/\t\m\p\/\t\m\p\.\v\E\N\K\3\D\y\M\d\v ]] 00:35:46.934 10:53:27 keyring_file -- keyring/file.sh@53 -- # get_key key1 00:35:46.934 10:53:27 keyring_file -- keyring/file.sh@53 -- # jq -r .path 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:46.934 10:53:27 keyring_file -- keyring/file.sh@53 -- # [[ /tmp/tmp.SDoaZaCYaQ == \/\t\m\p\/\t\m\p\.\S\D\o\a\Z\a\C\Y\a\Q ]] 00:35:46.934 10:53:27 keyring_file -- keyring/file.sh@54 -- # get_refcnt key0 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:46.934 10:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 
00:35:47.193 10:53:27 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:35:47.193 10:53:27 keyring_file -- keyring/file.sh@55 -- # get_refcnt key1 00:35:47.193 10:53:27 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.193 10:53:27 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.193 10:53:27 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.193 10:53:27 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.193 10:53:27 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.452 10:53:28 keyring_file -- keyring/file.sh@55 -- # (( 1 == 1 )) 00:35:47.452 10:53:28 keyring_file -- keyring/file.sh@58 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.452 10:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:47.710 [2024-11-20 10:53:28.185971] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:47.710 nvme0n1 00:35:47.710 10:53:28 keyring_file -- keyring/file.sh@60 -- # get_refcnt key0 00:35:47.710 10:53:28 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:47.710 10:53:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.710 10:53:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.710 10:53:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:47.710 10:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
keyring_get_keys 00:35:47.969 10:53:28 keyring_file -- keyring/file.sh@60 -- # (( 2 == 2 )) 00:35:47.969 10:53:28 keyring_file -- keyring/file.sh@61 -- # get_refcnt key1 00:35:47.969 10:53:28 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:47.969 10:53:28 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:47.969 10:53:28 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:47.969 10:53:28 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:47.969 10:53:28 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:47.969 10:53:28 keyring_file -- keyring/file.sh@61 -- # (( 1 == 1 )) 00:35:47.969 10:53:28 keyring_file -- keyring/file.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:48.228 Running I/O for 1 seconds... 00:35:49.163 19473.00 IOPS, 76.07 MiB/s 00:35:49.163 Latency(us) 00:35:49.163 [2024-11-20T09:53:29.894Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:49.163 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:35:49.163 nvme0n1 : 1.00 19522.37 76.26 0.00 0.00 6545.20 2527.82 10173.68 00:35:49.163 [2024-11-20T09:53:29.894Z] =================================================================================================================== 00:35:49.163 [2024-11-20T09:53:29.894Z] Total : 19522.37 76.26 0.00 0.00 6545.20 2527.82 10173.68 00:35:49.163 { 00:35:49.163 "results": [ 00:35:49.163 { 00:35:49.163 "job": "nvme0n1", 00:35:49.163 "core_mask": "0x2", 00:35:49.163 "workload": "randrw", 00:35:49.163 "percentage": 50, 00:35:49.163 "status": "finished", 00:35:49.163 "queue_depth": 128, 00:35:49.163 "io_size": 4096, 00:35:49.163 "runtime": 1.004079, 00:35:49.163 "iops": 19522.368259868, 00:35:49.163 "mibps": 76.25925101510937, 
00:35:49.163 "io_failed": 0, 00:35:49.163 "io_timeout": 0, 00:35:49.163 "avg_latency_us": 6545.195499390246, 00:35:49.163 "min_latency_us": 2527.8171428571427, 00:35:49.163 "max_latency_us": 10173.683809523809 00:35:49.163 } 00:35:49.163 ], 00:35:49.163 "core_count": 1 00:35:49.163 } 00:35:49.163 10:53:29 keyring_file -- keyring/file.sh@65 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:49.163 10:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:35:49.422 10:53:29 keyring_file -- keyring/file.sh@66 -- # get_refcnt key0 00:35:49.422 10:53:29 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.422 10:53:29 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.422 10:53:29 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.422 10:53:29 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:49.422 10:53:29 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.681 10:53:30 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:35:49.681 10:53:30 keyring_file -- keyring/file.sh@67 -- # get_refcnt key1 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:49.681 10:53:30 keyring_file -- keyring/file.sh@67 -- # (( 1 == 1 )) 00:35:49.681 10:53:30 keyring_file -- keyring/file.sh@70 -- # NOT bperf_cmd 
bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:49.681 10:53:30 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.681 10:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:35:49.940 [2024-11-20 10:53:30.528545] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:35:49.940 [2024-11-20 10:53:30.529267] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c631f0 (107): Transport endpoint is not connected 00:35:49.940 [2024-11-20 10:53:30.530261] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1c631f0 (9): Bad file descriptor 00:35:49.940 [2024-11-20 10:53:30.531262] 
nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:35:49.940 [2024-11-20 10:53:30.531272] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:35:49.940 [2024-11-20 10:53:30.531280] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:35:49.940 [2024-11-20 10:53:30.531289] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 00:35:49.940 request: 00:35:49.940 { 00:35:49.940 "name": "nvme0", 00:35:49.940 "trtype": "tcp", 00:35:49.940 "traddr": "127.0.0.1", 00:35:49.940 "adrfam": "ipv4", 00:35:49.940 "trsvcid": "4420", 00:35:49.940 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:49.940 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:49.940 "prchk_reftag": false, 00:35:49.940 "prchk_guard": false, 00:35:49.940 "hdgst": false, 00:35:49.940 "ddgst": false, 00:35:49.940 "psk": "key1", 00:35:49.940 "allow_unrecognized_csi": false, 00:35:49.940 "method": "bdev_nvme_attach_controller", 00:35:49.940 "req_id": 1 00:35:49.940 } 00:35:49.940 Got JSON-RPC error response 00:35:49.940 response: 00:35:49.940 { 00:35:49.940 "code": -5, 00:35:49.940 "message": "Input/output error" 00:35:49.940 } 00:35:49.940 10:53:30 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:49.940 10:53:30 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:49.940 10:53:30 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:49.940 10:53:30 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:49.940 10:53:30 keyring_file -- keyring/file.sh@72 -- # get_refcnt key0 00:35:49.940 10:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:49.940 10:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:49.940 10:53:30 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:35:49.940 10:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:49.940 10:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:50.198 10:53:30 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:35:50.198 10:53:30 keyring_file -- keyring/file.sh@73 -- # get_refcnt key1 00:35:50.198 10:53:30 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:50.198 10:53:30 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:50.198 10:53:30 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:50.198 10:53:30 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:50.198 10:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.456 10:53:30 keyring_file -- keyring/file.sh@73 -- # (( 1 == 1 )) 00:35:50.456 10:53:30 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key0 00:35:50.456 10:53:30 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:50.456 10:53:31 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_file_remove_key key1 00:35:50.456 10:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:35:50.714 10:53:31 keyring_file -- keyring/file.sh@78 -- # bperf_cmd keyring_get_keys 00:35:50.714 10:53:31 keyring_file -- keyring/file.sh@78 -- # jq length 00:35:50.714 10:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:50.973 10:53:31 keyring_file -- keyring/file.sh@78 -- # (( 0 == 0 
)) 00:35:50.973 10:53:31 keyring_file -- keyring/file.sh@81 -- # chmod 0660 /tmp/tmp.vENK3DyMdv 00:35:50.973 10:53:31 keyring_file -- keyring/file.sh@82 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:50.973 10:53:31 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:50.973 10:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:50.973 [2024-11-20 10:53:31.700378] keyring.c: 36:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.vENK3DyMdv': 0100660 00:35:50.973 [2024-11-20 10:53:31.700404] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:35:51.231 request: 00:35:51.231 { 00:35:51.231 "name": "key0", 00:35:51.231 "path": "/tmp/tmp.vENK3DyMdv", 00:35:51.231 "method": "keyring_file_add_key", 00:35:51.231 "req_id": 1 00:35:51.231 } 00:35:51.231 Got JSON-RPC error response 00:35:51.231 response: 00:35:51.231 { 00:35:51.231 "code": -1, 00:35:51.231 "message": "Operation not permitted" 00:35:51.231 } 00:35:51.231 10:53:31 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:51.231 10:53:31 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:51.231 10:53:31 
keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:51.231 10:53:31 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:51.231 10:53:31 keyring_file -- keyring/file.sh@85 -- # chmod 0600 /tmp/tmp.vENK3DyMdv 00:35:51.231 10:53:31 keyring_file -- keyring/file.sh@86 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.vENK3DyMdv 00:35:51.231 10:53:31 keyring_file -- keyring/file.sh@87 -- # rm -f /tmp/tmp.vENK3DyMdv 00:35:51.231 10:53:31 keyring_file -- keyring/file.sh@89 -- # get_refcnt key0 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:51.231 10:53:31 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:51.489 10:53:32 keyring_file -- keyring/file.sh@89 -- # (( 1 == 1 )) 00:35:51.489 10:53:32 keyring_file -- keyring/file.sh@91 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@652 -- # local es=0 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:35:51.489 10:53:32 keyring_file -- 
common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:35:51.489 10:53:32 keyring_file -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.489 10:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:51.748 [2024-11-20 10:53:32.273896] keyring.c: 31:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.vENK3DyMdv': No such file or directory 00:35:51.748 [2024-11-20 10:53:32.273921] nvme_tcp.c:2498:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:35:51.748 [2024-11-20 10:53:32.273937] nvme.c: 682:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:35:51.748 [2024-11-20 10:53:32.273959] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, No such device 00:35:51.748 [2024-11-20 10:53:32.273967] nvme.c: 831:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:35:51.748 [2024-11-20 10:53:32.273974] bdev_nvme.c:6763:spdk_bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:35:51.748 request: 00:35:51.748 { 00:35:51.748 "name": "nvme0", 00:35:51.748 "trtype": "tcp", 00:35:51.748 "traddr": "127.0.0.1", 00:35:51.748 "adrfam": "ipv4", 00:35:51.748 "trsvcid": "4420", 00:35:51.748 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:51.748 "hostnqn": 
"nqn.2016-06.io.spdk:host0", 00:35:51.748 "prchk_reftag": false, 00:35:51.748 "prchk_guard": false, 00:35:51.748 "hdgst": false, 00:35:51.748 "ddgst": false, 00:35:51.748 "psk": "key0", 00:35:51.748 "allow_unrecognized_csi": false, 00:35:51.748 "method": "bdev_nvme_attach_controller", 00:35:51.748 "req_id": 1 00:35:51.748 } 00:35:51.748 Got JSON-RPC error response 00:35:51.748 response: 00:35:51.748 { 00:35:51.748 "code": -19, 00:35:51.748 "message": "No such device" 00:35:51.748 } 00:35:51.748 10:53:32 keyring_file -- common/autotest_common.sh@655 -- # es=1 00:35:51.748 10:53:32 keyring_file -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:35:51.748 10:53:32 keyring_file -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:35:51.748 10:53:32 keyring_file -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:35:51.748 10:53:32 keyring_file -- keyring/file.sh@93 -- # bperf_cmd keyring_file_remove_key key0 00:35:51.748 10:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:52.007 10:53:32 keyring_file -- keyring/file.sh@96 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@17 -- # name=key0 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@17 -- # digest=0 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@18 -- # mktemp 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.z4NOOcGZMl 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:52.007 10:53:32 keyring_file -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:52.007 10:53:32 keyring_file -- 
nvmf/common.sh@504 -- # local prefix key digest 00:35:52.007 10:53:32 keyring_file -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:52.007 10:53:32 keyring_file -- nvmf/common.sh@506 -- # key=00112233445566778899aabbccddeeff 00:35:52.007 10:53:32 keyring_file -- nvmf/common.sh@506 -- # digest=0 00:35:52.007 10:53:32 keyring_file -- nvmf/common.sh@507 -- # python - 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.z4NOOcGZMl 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.z4NOOcGZMl 00:35:52.007 10:53:32 keyring_file -- keyring/file.sh@96 -- # key0path=/tmp/tmp.z4NOOcGZMl 00:35:52.007 10:53:32 keyring_file -- keyring/file.sh@97 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4NOOcGZMl 00:35:52.007 10:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4NOOcGZMl 00:35:52.265 10:53:32 keyring_file -- keyring/file.sh@98 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.266 10:53:32 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:52.266 nvme0n1 00:35:52.524 10:53:33 keyring_file -- keyring/file.sh@100 -- # get_refcnt key0 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@8 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:52.524 10:53:33 keyring_file -- keyring/file.sh@100 -- # (( 2 == 2 )) 00:35:52.524 10:53:33 keyring_file -- keyring/file.sh@101 -- # bperf_cmd keyring_file_remove_key key0 00:35:52.524 10:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:35:52.783 10:53:33 keyring_file -- keyring/file.sh@102 -- # get_key key0 00:35:52.783 10:53:33 keyring_file -- keyring/file.sh@102 -- # jq -r .removed 00:35:52.783 10:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:52.783 10:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:52.783 10:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.041 10:53:33 keyring_file -- keyring/file.sh@102 -- # [[ true == \t\r\u\e ]] 00:35:53.041 10:53:33 keyring_file -- keyring/file.sh@103 -- # get_refcnt key0 00:35:53.041 10:53:33 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:53.041 10:53:33 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:53.041 10:53:33 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:53.041 10:53:33 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:53.041 10:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.299 10:53:33 keyring_file -- keyring/file.sh@103 -- # (( 1 == 1 )) 00:35:53.299 10:53:33 keyring_file -- keyring/file.sh@104 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:35:53.299 10:53:33 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock 
bdev_nvme_detach_controller nvme0 00:35:53.299 10:53:34 keyring_file -- keyring/file.sh@105 -- # bperf_cmd keyring_get_keys 00:35:53.299 10:53:34 keyring_file -- keyring/file.sh@105 -- # jq length 00:35:53.299 10:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:53.557 10:53:34 keyring_file -- keyring/file.sh@105 -- # (( 0 == 0 )) 00:35:53.557 10:53:34 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.z4NOOcGZMl 00:35:53.557 10:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.z4NOOcGZMl 00:35:53.816 10:53:34 keyring_file -- keyring/file.sh@109 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.SDoaZaCYaQ 00:35:53.816 10:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.SDoaZaCYaQ 00:35:54.074 10:53:34 keyring_file -- keyring/file.sh@110 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.074 10:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:35:54.333 nvme0n1 00:35:54.333 10:53:34 keyring_file -- keyring/file.sh@113 -- # bperf_cmd save_config 00:35:54.333 10:53:34 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:35:54.593 10:53:35 keyring_file -- keyring/file.sh@113 -- # config='{ 00:35:54.593 "subsystems": [ 00:35:54.593 { 00:35:54.593 "subsystem": 
"keyring", 00:35:54.593 "config": [ 00:35:54.593 { 00:35:54.593 "method": "keyring_file_add_key", 00:35:54.593 "params": { 00:35:54.593 "name": "key0", 00:35:54.593 "path": "/tmp/tmp.z4NOOcGZMl" 00:35:54.593 } 00:35:54.593 }, 00:35:54.593 { 00:35:54.593 "method": "keyring_file_add_key", 00:35:54.593 "params": { 00:35:54.593 "name": "key1", 00:35:54.593 "path": "/tmp/tmp.SDoaZaCYaQ" 00:35:54.593 } 00:35:54.593 } 00:35:54.593 ] 00:35:54.593 }, 00:35:54.593 { 00:35:54.593 "subsystem": "iobuf", 00:35:54.593 "config": [ 00:35:54.593 { 00:35:54.593 "method": "iobuf_set_options", 00:35:54.594 "params": { 00:35:54.594 "small_pool_count": 8192, 00:35:54.594 "large_pool_count": 1024, 00:35:54.594 "small_bufsize": 8192, 00:35:54.594 "large_bufsize": 135168, 00:35:54.594 "enable_numa": false 00:35:54.594 } 00:35:54.594 } 00:35:54.594 ] 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "subsystem": "sock", 00:35:54.594 "config": [ 00:35:54.594 { 00:35:54.594 "method": "sock_set_default_impl", 00:35:54.594 "params": { 00:35:54.594 "impl_name": "posix" 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "sock_impl_set_options", 00:35:54.594 "params": { 00:35:54.594 "impl_name": "ssl", 00:35:54.594 "recv_buf_size": 4096, 00:35:54.594 "send_buf_size": 4096, 00:35:54.594 "enable_recv_pipe": true, 00:35:54.594 "enable_quickack": false, 00:35:54.594 "enable_placement_id": 0, 00:35:54.594 "enable_zerocopy_send_server": true, 00:35:54.594 "enable_zerocopy_send_client": false, 00:35:54.594 "zerocopy_threshold": 0, 00:35:54.594 "tls_version": 0, 00:35:54.594 "enable_ktls": false 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "sock_impl_set_options", 00:35:54.594 "params": { 00:35:54.594 "impl_name": "posix", 00:35:54.594 "recv_buf_size": 2097152, 00:35:54.594 "send_buf_size": 2097152, 00:35:54.594 "enable_recv_pipe": true, 00:35:54.594 "enable_quickack": false, 00:35:54.594 "enable_placement_id": 0, 00:35:54.594 "enable_zerocopy_send_server": true, 
00:35:54.594 "enable_zerocopy_send_client": false, 00:35:54.594 "zerocopy_threshold": 0, 00:35:54.594 "tls_version": 0, 00:35:54.594 "enable_ktls": false 00:35:54.594 } 00:35:54.594 } 00:35:54.594 ] 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "subsystem": "vmd", 00:35:54.594 "config": [] 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "subsystem": "accel", 00:35:54.594 "config": [ 00:35:54.594 { 00:35:54.594 "method": "accel_set_options", 00:35:54.594 "params": { 00:35:54.594 "small_cache_size": 128, 00:35:54.594 "large_cache_size": 16, 00:35:54.594 "task_count": 2048, 00:35:54.594 "sequence_count": 2048, 00:35:54.594 "buf_count": 2048 00:35:54.594 } 00:35:54.594 } 00:35:54.594 ] 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "subsystem": "bdev", 00:35:54.594 "config": [ 00:35:54.594 { 00:35:54.594 "method": "bdev_set_options", 00:35:54.594 "params": { 00:35:54.594 "bdev_io_pool_size": 65535, 00:35:54.594 "bdev_io_cache_size": 256, 00:35:54.594 "bdev_auto_examine": true, 00:35:54.594 "iobuf_small_cache_size": 128, 00:35:54.594 "iobuf_large_cache_size": 16 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_raid_set_options", 00:35:54.594 "params": { 00:35:54.594 "process_window_size_kb": 1024, 00:35:54.594 "process_max_bandwidth_mb_sec": 0 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_iscsi_set_options", 00:35:54.594 "params": { 00:35:54.594 "timeout_sec": 30 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_nvme_set_options", 00:35:54.594 "params": { 00:35:54.594 "action_on_timeout": "none", 00:35:54.594 "timeout_us": 0, 00:35:54.594 "timeout_admin_us": 0, 00:35:54.594 "keep_alive_timeout_ms": 10000, 00:35:54.594 "arbitration_burst": 0, 00:35:54.594 "low_priority_weight": 0, 00:35:54.594 "medium_priority_weight": 0, 00:35:54.594 "high_priority_weight": 0, 00:35:54.594 "nvme_adminq_poll_period_us": 10000, 00:35:54.594 "nvme_ioq_poll_period_us": 0, 00:35:54.594 "io_queue_requests": 512, 
00:35:54.594 "delay_cmd_submit": true, 00:35:54.594 "transport_retry_count": 4, 00:35:54.594 "bdev_retry_count": 3, 00:35:54.594 "transport_ack_timeout": 0, 00:35:54.594 "ctrlr_loss_timeout_sec": 0, 00:35:54.594 "reconnect_delay_sec": 0, 00:35:54.594 "fast_io_fail_timeout_sec": 0, 00:35:54.594 "disable_auto_failback": false, 00:35:54.594 "generate_uuids": false, 00:35:54.594 "transport_tos": 0, 00:35:54.594 "nvme_error_stat": false, 00:35:54.594 "rdma_srq_size": 0, 00:35:54.594 "io_path_stat": false, 00:35:54.594 "allow_accel_sequence": false, 00:35:54.594 "rdma_max_cq_size": 0, 00:35:54.594 "rdma_cm_event_timeout_ms": 0, 00:35:54.594 "dhchap_digests": [ 00:35:54.594 "sha256", 00:35:54.594 "sha384", 00:35:54.594 "sha512" 00:35:54.594 ], 00:35:54.594 "dhchap_dhgroups": [ 00:35:54.594 "null", 00:35:54.594 "ffdhe2048", 00:35:54.594 "ffdhe3072", 00:35:54.594 "ffdhe4096", 00:35:54.594 "ffdhe6144", 00:35:54.594 "ffdhe8192" 00:35:54.594 ] 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_nvme_attach_controller", 00:35:54.594 "params": { 00:35:54.594 "name": "nvme0", 00:35:54.594 "trtype": "TCP", 00:35:54.594 "adrfam": "IPv4", 00:35:54.594 "traddr": "127.0.0.1", 00:35:54.594 "trsvcid": "4420", 00:35:54.594 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.594 "prchk_reftag": false, 00:35:54.594 "prchk_guard": false, 00:35:54.594 "ctrlr_loss_timeout_sec": 0, 00:35:54.594 "reconnect_delay_sec": 0, 00:35:54.594 "fast_io_fail_timeout_sec": 0, 00:35:54.594 "psk": "key0", 00:35:54.594 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.594 "hdgst": false, 00:35:54.594 "ddgst": false, 00:35:54.594 "multipath": "multipath" 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_nvme_set_hotplug", 00:35:54.594 "params": { 00:35:54.594 "period_us": 100000, 00:35:54.594 "enable": false 00:35:54.594 } 00:35:54.594 }, 00:35:54.594 { 00:35:54.594 "method": "bdev_wait_for_examine" 00:35:54.594 } 00:35:54.594 ] 00:35:54.594 }, 00:35:54.594 { 
00:35:54.594 "subsystem": "nbd", 00:35:54.594 "config": [] 00:35:54.594 } 00:35:54.594 ] 00:35:54.594 }' 00:35:54.594 10:53:35 keyring_file -- keyring/file.sh@115 -- # killprocess 3505640 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3505640 ']' 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3505640 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505640 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505640' 00:35:54.594 killing process with pid 3505640 00:35:54.594 10:53:35 keyring_file -- common/autotest_common.sh@973 -- # kill 3505640 00:35:54.594 Received shutdown signal, test time was about 1.000000 seconds 00:35:54.594 00:35:54.594 Latency(us) 00:35:54.594 [2024-11-20T09:53:35.325Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:54.594 [2024-11-20T09:53:35.325Z] =================================================================================================================== 00:35:54.595 [2024-11-20T09:53:35.326Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@978 -- # wait 3505640 00:35:54.595 10:53:35 keyring_file -- keyring/file.sh@118 -- # bperfpid=3507172 00:35:54.595 10:53:35 keyring_file -- keyring/file.sh@120 -- # waitforlisten 3507172 /var/tmp/bperf.sock 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@835 -- # '[' -z 3507172 ']' 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@839 -- # local 
rpc_addr=/var/tmp/bperf.sock 00:35:54.595 10:53:35 keyring_file -- keyring/file.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:54.595 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:54.595 10:53:35 keyring_file -- keyring/file.sh@116 -- # echo '{ 00:35:54.595 "subsystems": [ 00:35:54.595 { 00:35:54.595 "subsystem": "keyring", 00:35:54.595 "config": [ 00:35:54.595 { 00:35:54.595 "method": "keyring_file_add_key", 00:35:54.595 "params": { 00:35:54.595 "name": "key0", 00:35:54.595 "path": "/tmp/tmp.z4NOOcGZMl" 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "keyring_file_add_key", 00:35:54.595 "params": { 00:35:54.595 "name": "key1", 00:35:54.595 "path": "/tmp/tmp.SDoaZaCYaQ" 00:35:54.595 } 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "iobuf", 00:35:54.595 "config": [ 00:35:54.595 { 00:35:54.595 "method": "iobuf_set_options", 00:35:54.595 "params": { 00:35:54.595 "small_pool_count": 8192, 00:35:54.595 "large_pool_count": 1024, 00:35:54.595 "small_bufsize": 8192, 00:35:54.595 "large_bufsize": 135168, 00:35:54.595 "enable_numa": false 00:35:54.595 } 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "sock", 00:35:54.595 "config": [ 00:35:54.595 { 00:35:54.595 "method": "sock_set_default_impl", 00:35:54.595 "params": { 00:35:54.595 "impl_name": "posix" 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "sock_impl_set_options", 00:35:54.595 
"params": { 00:35:54.595 "impl_name": "ssl", 00:35:54.595 "recv_buf_size": 4096, 00:35:54.595 "send_buf_size": 4096, 00:35:54.595 "enable_recv_pipe": true, 00:35:54.595 "enable_quickack": false, 00:35:54.595 "enable_placement_id": 0, 00:35:54.595 "enable_zerocopy_send_server": true, 00:35:54.595 "enable_zerocopy_send_client": false, 00:35:54.595 "zerocopy_threshold": 0, 00:35:54.595 "tls_version": 0, 00:35:54.595 "enable_ktls": false 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "sock_impl_set_options", 00:35:54.595 "params": { 00:35:54.595 "impl_name": "posix", 00:35:54.595 "recv_buf_size": 2097152, 00:35:54.595 "send_buf_size": 2097152, 00:35:54.595 "enable_recv_pipe": true, 00:35:54.595 "enable_quickack": false, 00:35:54.595 "enable_placement_id": 0, 00:35:54.595 "enable_zerocopy_send_server": true, 00:35:54.595 "enable_zerocopy_send_client": false, 00:35:54.595 "zerocopy_threshold": 0, 00:35:54.595 "tls_version": 0, 00:35:54.595 "enable_ktls": false 00:35:54.595 } 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "vmd", 00:35:54.595 "config": [] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "accel", 00:35:54.595 "config": [ 00:35:54.595 { 00:35:54.595 "method": "accel_set_options", 00:35:54.595 "params": { 00:35:54.595 "small_cache_size": 128, 00:35:54.595 "large_cache_size": 16, 00:35:54.595 "task_count": 2048, 00:35:54.595 "sequence_count": 2048, 00:35:54.595 "buf_count": 2048 00:35:54.595 } 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "bdev", 00:35:54.595 "config": [ 00:35:54.595 { 00:35:54.595 "method": "bdev_set_options", 00:35:54.595 "params": { 00:35:54.595 "bdev_io_pool_size": 65535, 00:35:54.595 "bdev_io_cache_size": 256, 00:35:54.595 "bdev_auto_examine": true, 00:35:54.595 "iobuf_small_cache_size": 128, 00:35:54.595 "iobuf_large_cache_size": 16 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_raid_set_options", 
00:35:54.595 "params": { 00:35:54.595 "process_window_size_kb": 1024, 00:35:54.595 "process_max_bandwidth_mb_sec": 0 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_iscsi_set_options", 00:35:54.595 "params": { 00:35:54.595 "timeout_sec": 30 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_nvme_set_options", 00:35:54.595 "params": { 00:35:54.595 "action_on_timeout": "none", 00:35:54.595 "timeout_us": 0, 00:35:54.595 "timeout_admin_us": 0, 00:35:54.595 "keep_alive_timeout_ms": 10000, 00:35:54.595 "arbitration_burst": 0, 00:35:54.595 "low_priority_weight": 0, 00:35:54.595 "medium_priority_weight": 0, 00:35:54.595 "high_priority_weight": 0, 00:35:54.595 "nvme_adminq_poll_period_us": 10000, 00:35:54.595 "nvme_ioq_poll_period_us": 0, 00:35:54.595 "io_queue_requests": 512, 00:35:54.595 "delay_cmd_submit": true, 00:35:54.595 "transport_retry_count": 4, 00:35:54.595 "bdev_retry_count": 3, 00:35:54.595 "transport_ack_timeout": 0, 00:35:54.595 "ctrlr_loss_timeout_sec": 0, 00:35:54.595 "reconnect_delay_sec": 0, 00:35:54.595 "fast_io_fail_timeout_sec": 0, 00:35:54.595 "disable_auto_failback": false, 00:35:54.595 "generate_uuids": false, 00:35:54.595 "transport_tos": 0, 00:35:54.595 "nvme_error_stat": false, 00:35:54.595 "rdma_srq_size": 0, 00:35:54.595 "io_path_stat": false, 00:35:54.595 "allow_accel_sequence": false, 00:35:54.595 "rdma_max_cq_size": 0, 00:35:54.595 "rdma_cm_event_timeout_ms": 0, 00:35:54.595 "dhchap_digests": [ 00:35:54.595 "sha256", 00:35:54.595 "sha384", 00:35:54.595 "sha512" 00:35:54.595 ], 00:35:54.595 "dhchap_dhgroups": [ 00:35:54.595 "null", 00:35:54.595 "ffdhe2048", 00:35:54.595 "ffdhe3072", 00:35:54.595 "ffdhe4096", 00:35:54.595 "ffdhe6144", 00:35:54.595 "ffdhe8192" 00:35:54.595 ] 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_nvme_attach_controller", 00:35:54.595 "params": { 00:35:54.595 "name": "nvme0", 00:35:54.595 "trtype": "TCP", 00:35:54.595 "adrfam": "IPv4", 
00:35:54.595 "traddr": "127.0.0.1", 00:35:54.595 "trsvcid": "4420", 00:35:54.595 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:35:54.595 "prchk_reftag": false, 00:35:54.595 "prchk_guard": false, 00:35:54.595 "ctrlr_loss_timeout_sec": 0, 00:35:54.595 "reconnect_delay_sec": 0, 00:35:54.595 "fast_io_fail_timeout_sec": 0, 00:35:54.595 "psk": "key0", 00:35:54.595 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:35:54.595 "hdgst": false, 00:35:54.595 "ddgst": false, 00:35:54.595 "multipath": "multipath" 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_nvme_set_hotplug", 00:35:54.595 "params": { 00:35:54.595 "period_us": 100000, 00:35:54.595 "enable": false 00:35:54.595 } 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "method": "bdev_wait_for_examine" 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }, 00:35:54.595 { 00:35:54.595 "subsystem": "nbd", 00:35:54.595 "config": [] 00:35:54.595 } 00:35:54.595 ] 00:35:54.595 }' 00:35:54.595 10:53:35 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:54.855 [2024-11-20 10:53:35.349963] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:35:54.855 [2024-11-20 10:53:35.350007] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507172 ] 00:35:54.855 [2024-11-20 10:53:35.423218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:54.855 [2024-11-20 10:53:35.465317] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:55.114 [2024-11-20 10:53:35.626252] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:55.682 10:53:36 keyring_file -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:55.682 10:53:36 keyring_file -- common/autotest_common.sh@868 -- # return 0 00:35:55.682 10:53:36 keyring_file -- keyring/file.sh@121 -- # bperf_cmd keyring_get_keys 00:35:55.682 10:53:36 keyring_file -- keyring/file.sh@121 -- # jq length 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.682 10:53:36 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:35:55.682 10:53:36 keyring_file -- keyring/file.sh@122 -- # get_refcnt key0 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:35:55.682 10:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:55.941 10:53:36 keyring_file -- keyring/file.sh@122 -- # (( 2 == 2 )) 00:35:55.941 10:53:36 keyring_file -- keyring/file.sh@123 -- # get_refcnt key1 00:35:55.941 10:53:36 
keyring_file -- keyring/common.sh@12 -- # get_key key1 00:35:55.941 10:53:36 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:35:55.941 10:53:36 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:55.941 10:53:36 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:35:55.941 10:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:56.199 10:53:36 keyring_file -- keyring/file.sh@123 -- # (( 1 == 1 )) 00:35:56.199 10:53:36 keyring_file -- keyring/file.sh@124 -- # bperf_cmd bdev_nvme_get_controllers 00:35:56.199 10:53:36 keyring_file -- keyring/file.sh@124 -- # jq -r '.[].name' 00:35:56.199 10:53:36 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:35:56.458 10:53:36 keyring_file -- keyring/file.sh@124 -- # [[ nvme0 == nvme0 ]] 00:35:56.458 10:53:36 keyring_file -- keyring/file.sh@1 -- # cleanup 00:35:56.458 10:53:36 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.z4NOOcGZMl /tmp/tmp.SDoaZaCYaQ 00:35:56.458 10:53:36 keyring_file -- keyring/file.sh@20 -- # killprocess 3507172 00:35:56.458 10:53:36 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3507172 ']' 00:35:56.458 10:53:36 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3507172 00:35:56.458 10:53:36 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:56.458 10:53:36 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507172 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@972 -- # echo 
'killing process with pid 3507172' 00:35:56.458 killing process with pid 3507172 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@973 -- # kill 3507172 00:35:56.458 Received shutdown signal, test time was about 1.000000 seconds 00:35:56.458 00:35:56.458 Latency(us) 00:35:56.458 [2024-11-20T09:53:37.189Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:35:56.458 [2024-11-20T09:53:37.189Z] =================================================================================================================== 00:35:56.458 [2024-11-20T09:53:37.189Z] Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:35:56.458 10:53:37 keyring_file -- common/autotest_common.sh@978 -- # wait 3507172 00:35:56.717 10:53:37 keyring_file -- keyring/file.sh@21 -- # killprocess 3505633 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@954 -- # '[' -z 3505633 ']' 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@958 -- # kill -0 3505633 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@959 -- # uname 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3505633 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3505633' 00:35:56.717 killing process with pid 3505633 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@973 -- # kill 3505633 00:35:56.717 10:53:37 keyring_file -- common/autotest_common.sh@978 -- # wait 3505633 00:35:56.976 00:35:56.976 real 0m11.704s 00:35:56.976 user 0m29.139s 00:35:56.976 sys 0m2.657s 00:35:56.976 10:53:37 keyring_file -- common/autotest_common.sh@1130 -- # xtrace_disable 
00:35:56.976 10:53:37 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:35:56.976 ************************************ 00:35:56.976 END TEST keyring_file 00:35:56.976 ************************************ 00:35:56.976 10:53:37 -- spdk/autotest.sh@293 -- # [[ y == y ]] 00:35:56.976 10:53:37 -- spdk/autotest.sh@294 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.976 10:53:37 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:35:56.976 10:53:37 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:35:56.976 10:53:37 -- common/autotest_common.sh@10 -- # set +x 00:35:56.976 ************************************ 00:35:56.976 START TEST keyring_linux 00:35:56.976 ************************************ 00:35:56.976 10:53:37 keyring_linux -- common/autotest_common.sh@1129 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/keyctl-session-wrapper /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:35:56.976 Joined session keyring: 896678666 00:35:56.976 * Looking for test storage... 
00:35:57.236 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1692 -- # [[ y == y ]] 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1693 -- # lcov --version 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1693 -- # awk '{print $NF}' 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1693 -- # lt 1.15 2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@333 -- # local ver1 ver1_l 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@334 -- # local ver2 ver2_l 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@336 -- # IFS=.-: 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@336 -- # read -ra ver1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@337 -- # IFS=.-: 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@337 -- # read -ra ver2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@338 -- # local 'op=<' 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@340 -- # ver1_l=2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@341 -- # ver2_l=1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@344 -- # case "$op" in 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@345 -- # : 1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@364 -- # (( v = 0 )) 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@365 -- # decimal 1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@353 -- # local d=1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@355 -- # echo 1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@365 -- # ver1[v]=1 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@366 -- # decimal 2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@353 -- # local d=2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@355 -- # echo 2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@366 -- # ver2[v]=2 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:35:57.236 10:53:37 keyring_linux -- scripts/common.sh@368 -- # return 0 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1694 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1706 -- # export 'LCOV_OPTS= 00:35:57.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.236 --rc genhtml_branch_coverage=1 00:35:57.236 --rc genhtml_function_coverage=1 00:35:57.236 --rc genhtml_legend=1 00:35:57.236 --rc geninfo_all_blocks=1 00:35:57.236 --rc geninfo_unexecuted_blocks=1 00:35:57.236 00:35:57.236 ' 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1706 -- # LCOV_OPTS=' 00:35:57.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.236 --rc genhtml_branch_coverage=1 00:35:57.236 --rc genhtml_function_coverage=1 00:35:57.236 --rc genhtml_legend=1 00:35:57.236 --rc geninfo_all_blocks=1 00:35:57.236 --rc geninfo_unexecuted_blocks=1 00:35:57.236 00:35:57.236 ' 
00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1707 -- # export 'LCOV=lcov 00:35:57.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.236 --rc genhtml_branch_coverage=1 00:35:57.236 --rc genhtml_function_coverage=1 00:35:57.236 --rc genhtml_legend=1 00:35:57.236 --rc geninfo_all_blocks=1 00:35:57.236 --rc geninfo_unexecuted_blocks=1 00:35:57.236 00:35:57.236 ' 00:35:57.236 10:53:37 keyring_linux -- common/autotest_common.sh@1707 -- # LCOV='lcov 00:35:57.236 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:35:57.236 --rc genhtml_branch_coverage=1 00:35:57.236 --rc genhtml_function_coverage=1 00:35:57.236 --rc genhtml_legend=1 00:35:57.236 --rc geninfo_all_blocks=1 00:35:57.236 --rc geninfo_unexecuted_blocks=1 00:35:57.236 00:35:57.236 ' 00:35:57.236 10:53:37 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:35:57.236 10:53:37 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:35:57.236 10:53:37 keyring_linux -- nvmf/common.sh@12 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@13 -- # NVMF_TRANSPORT_OPTS= 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@14 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@15 -- # nvme gen-hostnqn 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@15 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:00ad29c2-ccbd-e911-906e-0017a4403562 00:35:57.237 10:53:37 
keyring_linux -- nvmf/common.sh@16 -- # NVME_HOSTID=00ad29c2-ccbd-e911-906e-0017a4403562 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@18 -- # NVME_CONNECT='nvme connect' 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@19 -- # NET_TYPE=phy 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@20 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@47 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:35:57.237 10:53:37 keyring_linux -- scripts/common.sh@15 -- # shopt -s extglob 00:35:57.237 10:53:37 keyring_linux -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:35:57.237 10:53:37 keyring_linux -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:35:57.237 10:53:37 keyring_linux -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:35:57.237 10:53:37 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.237 10:53:37 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.237 10:53:37 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.237 10:53:37 keyring_linux -- paths/export.sh@5 -- # export PATH 00:35:57.237 10:53:37 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@48 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/setup.sh 00:35:57.237 10:53:37 keyring_linux -- nvmf/setup.sh@6 -- # NVMF_BRIDGE=nvmf_br 00:35:57.237 10:53:37 keyring_linux -- nvmf/setup.sh@7 -- # NVMF_TARGET_NAMESPACE=nvmf_ns_spdk 00:35:57.237 10:53:37 keyring_linux -- nvmf/setup.sh@8 -- # NVMF_TARGET_NS_CMD=() 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@50 -- # : 0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@51 -- # export NVMF_APP_SHM_ID 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@52 -- # build_nvmf_app_args 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@23 -- # '[' 0 -eq 1 ']' 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@27 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@31 -- # '[' '' -eq 1 ']' 00:35:57.237 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh: line 31: [: : integer expression expected 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@35 -- # '[' -n '' ']' 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@37 -- # '[' 0 -eq 1 ']' 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@54 -- # have_pci_nics=0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # 
key=00112233445566778899aabbccddeeff 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@507 -- # python - 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:35:57.237 /tmp/:spdk-test:key0 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@517 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@504 -- # local prefix key digest 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # prefix=NVMeTLSkey-1 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # key=112233445566778899aabbccddeeff00 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@506 -- # digest=0 00:35:57.237 10:53:37 keyring_linux -- nvmf/common.sh@507 -- # python - 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:35:57.237 10:53:37 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:35:57.237 /tmp/:spdk-test:key1 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=3507722 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@53 -- # 
waitforlisten 3507722 00:35:57.237 10:53:37 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3507722 ']' 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:57.237 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.237 10:53:37 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.237 [2024-11-20 10:53:37.958418] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:35:57.237 [2024-11-20 10:53:37.958467] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507722 ] 00:35:57.495 [2024-11-20 10:53:38.033722] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:57.495 [2024-11-20 10:53:38.075700] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 0 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:57.754 10:53:38 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@563 -- # xtrace_disable 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.754 [2024-11-20 10:53:38.294230] tcp.c: 738:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:35:57.754 null0 00:35:57.754 [2024-11-20 10:53:38.326279] tcp.c:1031:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:35:57.754 [2024-11-20 10:53:38.326621] tcp.c:1081:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:35:57.754 10:53:38 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:35:57.754 46786457 00:35:57.754 10:53:38 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:35:57.754 262365739 00:35:57.754 10:53:38 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=3507735 00:35:57.754 10:53:38 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 3507735 /var/tmp/bperf.sock 00:35:57.754 10:53:38 keyring_linux -- 
keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@835 -- # '[' -z 3507735 ']' 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/bperf.sock 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:35:57.754 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:57.754 10:53:38 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:35:57.754 [2024-11-20 10:53:38.396152] Starting SPDK v25.01-pre git sha1 097badaeb / DPDK 24.03.0 initialization... 
00:35:57.754 [2024-11-20 10:53:38.396195] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid3507735 ] 00:35:57.754 [2024-11-20 10:53:38.469179] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:58.013 [2024-11-20 10:53:38.512049] reactor.c:1005:reactor_run: *NOTICE*: Reactor started on core 1 00:35:58.013 10:53:38 keyring_linux -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:58.013 10:53:38 keyring_linux -- common/autotest_common.sh@868 -- # return 0 00:35:58.013 10:53:38 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:35:58.013 10:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:35:58.271 10:53:38 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:35:58.271 10:53:38 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:58.531 10:53:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:35:58.531 [2024-11-20 10:53:39.175211] bdev_nvme_rpc.c: 514:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:35:58.531 nvme0n1 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@77 
-- # check_keys 1 :spdk-test:key0 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:35:58.531 10:53:39 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:35:58.531 10:53:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:58.790 10:53:39 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:35:58.790 10:53:39 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:35:58.790 10:53:39 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:35:58.790 10:53:39 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:35:58.790 10:53:39 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:35:58.790 10:53:39 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:35:58.790 10:53:39 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@25 -- # sn=46786457 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@26 -- # [[ 46786457 == \4\6\7\8\6\4\5\7 ]] 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 46786457 00:35:59.048 10:53:39 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:35:59.048 10:53:39 keyring_linux -- 
keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:35:59.048 Running I/O for 1 seconds... 00:36:00.423 21011.00 IOPS, 82.07 MiB/s 00:36:00.423 Latency(us) 00:36:00.423 [2024-11-20T09:53:41.154Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:36:00.423 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:36:00.424 nvme0n1 : 1.01 21014.56 82.09 0.00 0.00 6071.21 4649.94 14730.00 00:36:00.424 [2024-11-20T09:53:41.155Z] =================================================================================================================== 00:36:00.424 [2024-11-20T09:53:41.155Z] Total : 21014.56 82.09 0.00 0.00 6071.21 4649.94 14730.00 00:36:00.424 { 00:36:00.424 "results": [ 00:36:00.424 { 00:36:00.424 "job": "nvme0n1", 00:36:00.424 "core_mask": "0x2", 00:36:00.424 "workload": "randread", 00:36:00.424 "status": "finished", 00:36:00.424 "queue_depth": 128, 00:36:00.424 "io_size": 4096, 00:36:00.424 "runtime": 1.005969, 00:36:00.424 "iops": 21014.56406708358, 00:36:00.424 "mibps": 82.08814088704523, 00:36:00.424 "io_failed": 0, 00:36:00.424 "io_timeout": 0, 00:36:00.424 "avg_latency_us": 6071.209040500969, 00:36:00.424 "min_latency_us": 4649.935238095238, 00:36:00.424 "max_latency_us": 14729.996190476191 00:36:00.424 } 00:36:00.424 ], 00:36:00.424 "core_count": 1 00:36:00.424 } 00:36:00.424 10:53:40 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:36:00.424 10:53:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:36:00.424 10:53:40 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:36:00.424 10:53:40 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:36:00.424 10:53:40 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:36:00.424 10:53:40 
keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:36:00.424 10:53:40 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:36:00.424 10:53:40 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:36:00.682 10:53:41 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:36:00.682 10:53:41 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:36:00.682 10:53:41 keyring_linux -- keyring/linux.sh@23 -- # return 00:36:00.682 10:53:41 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@652 -- # local es=0 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@654 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@640 -- # local arg=bperf_cmd 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@644 -- # type -t bperf_cmd 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:36:00.682 10:53:41 keyring_linux -- common/autotest_common.sh@655 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.682 10:53:41 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 
-q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:36:00.682 [2024-11-20 10:53:41.344937] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 421:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:36:00.682 [2024-11-20 10:53:41.345794] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aaf60 (107): Transport endpoint is not connected 00:36:00.682 [2024-11-20 10:53:41.346789] nvme_tcp.c:2085:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x6aaf60 (9): Bad file descriptor 00:36:00.682 [2024-11-20 10:53:41.347790] nvme_ctrlr.c:4206:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] Ctrlr is in error state 00:36:00.682 [2024-11-20 10:53:41.347800] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:36:00.682 [2024-11-20 10:53:41.347807] nvme.c: 884:nvme_dummy_attach_fail_cb: *ERROR*: Failed to attach nvme ctrlr: trtype=TCP adrfam=IPv4 traddr=127.0.0.1 trsvcid=4420 subnqn=nqn.2016-06.io.spdk:cnode0, Operation not permitted 00:36:00.682 [2024-11-20 10:53:41.347816] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0, 0] in failed state. 
00:36:00.682 request: 00:36:00.682 { 00:36:00.683 "name": "nvme0", 00:36:00.683 "trtype": "tcp", 00:36:00.683 "traddr": "127.0.0.1", 00:36:00.683 "adrfam": "ipv4", 00:36:00.683 "trsvcid": "4420", 00:36:00.683 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:36:00.683 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:36:00.683 "prchk_reftag": false, 00:36:00.683 "prchk_guard": false, 00:36:00.683 "hdgst": false, 00:36:00.683 "ddgst": false, 00:36:00.683 "psk": ":spdk-test:key1", 00:36:00.683 "allow_unrecognized_csi": false, 00:36:00.683 "method": "bdev_nvme_attach_controller", 00:36:00.683 "req_id": 1 00:36:00.683 } 00:36:00.683 Got JSON-RPC error response 00:36:00.683 response: 00:36:00.683 { 00:36:00.683 "code": -5, 00:36:00.683 "message": "Input/output error" 00:36:00.683 } 00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@655 -- # es=1 00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@33 -- # sn=46786457 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 46786457 00:36:00.683 1 links removed 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:36:00.683 
10:53:41 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn
00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1
00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1
00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@33 -- # sn=262365739
00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 262365739
00:36:00.683 1 links removed
00:36:00.683 10:53:41 keyring_linux -- keyring/linux.sh@41 -- # killprocess 3507735
00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3507735 ']'
00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3507735
00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:00.683 10:53:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507735
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_1
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']'
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507735'
00:36:00.942 killing process with pid 3507735
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 3507735
00:36:00.942 Received shutdown signal, test time was about 1.000000 seconds
00:36:00.942
00:36:00.942 Latency(us)
00:36:00.942 [2024-11-20T09:53:41.673Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:36:00.942 [2024-11-20T09:53:41.673Z] ===================================================================================================================
00:36:00.942 [2024-11-20T09:53:41.673Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 3507735
00:36:00.942 10:53:41 keyring_linux -- keyring/linux.sh@42 -- # killprocess 3507722
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@954 -- # '[' -z 3507722 ']'
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@958 -- # kill -0 3507722
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@959 -- # uname
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 3507722
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@972 -- # echo 'killing process with pid 3507722'
00:36:00.942 killing process with pid 3507722
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@973 -- # kill 3507722
00:36:00.942 10:53:41 keyring_linux -- common/autotest_common.sh@978 -- # wait 3507722
00:36:01.507
00:36:01.507 real 0m4.324s
00:36:01.507 user 0m8.148s
00:36:01.507 sys 0m1.448s
00:36:01.507 10:53:41 keyring_linux -- common/autotest_common.sh@1130 -- # xtrace_disable
00:36:01.507 10:53:41 keyring_linux -- common/autotest_common.sh@10 -- # set +x
00:36:01.507 ************************************
00:36:01.507 END TEST keyring_linux
00:36:01.507 ************************************
00:36:01.507 10:53:41 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@342 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']'
00:36:01.507 10:53:41 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]]
00:36:01.507 10:53:41 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]]
00:36:01.507 10:53:41 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]]
00:36:01.507 10:53:41 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]]
00:36:01.507 10:53:41 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT
00:36:01.507 10:53:41 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup
00:36:01.507 10:53:41 -- common/autotest_common.sh@726 -- # xtrace_disable
00:36:01.507 10:53:41 -- common/autotest_common.sh@10 -- # set +x
00:36:01.507 10:53:41 -- spdk/autotest.sh@388 -- # autotest_cleanup
00:36:01.507 10:53:41 -- common/autotest_common.sh@1396 -- # local autotest_es=0
00:36:01.507 10:53:41 -- common/autotest_common.sh@1397 -- # xtrace_disable
00:36:01.507 10:53:41 -- common/autotest_common.sh@10 -- # set +x
00:36:06.778 INFO: APP EXITING
00:36:06.778 INFO: killing all VMs
00:36:06.778 INFO: killing vhost app
00:36:06.778 INFO: EXIT DONE
00:36:09.312 0000:5e:00.0 (8086 0a54): Already using the nvme driver
00:36:09.312 0000:00:04.7 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.6 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.5 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.4 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.3 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.2 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.1 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:00:04.0 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.7 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.6 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.5 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.4 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.3 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.2 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.1 (8086 2021): Already using the ioatdma driver
00:36:09.312 0000:80:04.0 (8086 2021): Already using the ioatdma driver
00:36:12.617 Cleaning
00:36:12.617 Removing: /var/run/dpdk/spdk0/config
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3
00:36:12.617 Removing: /var/run/dpdk/spdk0/fbarray_memzone
00:36:12.617 Removing: /var/run/dpdk/spdk0/hugepage_info
00:36:12.617 Removing: /var/run/dpdk/spdk1/config
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3
00:36:12.617 Removing: /var/run/dpdk/spdk1/fbarray_memzone
00:36:12.617 Removing: /var/run/dpdk/spdk1/hugepage_info
00:36:12.617 Removing: /var/run/dpdk/spdk2/config
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3
00:36:12.617 Removing: /var/run/dpdk/spdk2/fbarray_memzone
00:36:12.617 Removing: /var/run/dpdk/spdk2/hugepage_info
00:36:12.617 Removing: /var/run/dpdk/spdk3/config
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3
00:36:12.617 Removing: /var/run/dpdk/spdk3/fbarray_memzone
00:36:12.617 Removing: /var/run/dpdk/spdk3/hugepage_info
00:36:12.617 Removing: /var/run/dpdk/spdk4/config
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3
00:36:12.617 Removing: /var/run/dpdk/spdk4/fbarray_memzone
00:36:12.617 Removing: /var/run/dpdk/spdk4/hugepage_info
00:36:12.617 Removing: /dev/shm/bdev_svc_trace.1
00:36:12.617 Removing: /dev/shm/nvmf_trace.0
00:36:12.617 Removing: /dev/shm/spdk_tgt_trace.pid3031120
00:36:12.617 Removing: /var/run/dpdk/spdk0
00:36:12.617 Removing: /var/run/dpdk/spdk1
00:36:12.617 Removing: /var/run/dpdk/spdk2
00:36:12.617 Removing: /var/run/dpdk/spdk3
00:36:12.617 Removing: /var/run/dpdk/spdk4
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3028755
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3029818
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3031120
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3031630
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3032553
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3032733
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3033704
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3033790
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3034070
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3035808
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3037290
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3037599
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3037889
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3038201
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3038491
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3038741
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3038914
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3039229
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3040023
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3043028
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3043287
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3043541
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3043552
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3044038
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3044052
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3044543
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3044559
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3044943
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3045045
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3045294
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3045308
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3045849
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3046025
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3046355
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3050151
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3054451
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3065309
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3066002
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3070510
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3070782
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3075067
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3080970
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3083618
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3094051
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3112435
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3116245
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3118076
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3118998
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3124007
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3170334
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3175737
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3181577
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3188102
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3188187
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3189016
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3189931
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3190845
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3191311
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3191422
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3191737
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3191782
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3191785
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3192696
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3193608
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3194525
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3194997
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3195000
00:36:12.617 Removing: /var/run/dpdk/spdk_pid3195268
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3196465
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3197456
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3206189
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3234727
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3239767
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3241373
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3243207
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3243438
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3243464
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3243688
00:36:12.618 Removing: /var/run/dpdk/spdk_pid3244193
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3246036
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3246800
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3247289
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3249469
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3249890
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3250614
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3254912
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3260326
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3260327
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3260328
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3264319
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3272727
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3276806
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3283364
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3284874
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3286215
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3287751
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3292292
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3296868
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3304353
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3304461
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3308982
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3309216
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3309445
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3309897
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3309910
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3314641
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3315205
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3319572
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3322301
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3327779
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3338586
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3338588
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3357351
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3357590
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3363690
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3363905
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3369171
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3369862
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3370333
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3370812
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3371550
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3372042
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3372713
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3373191
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3377580
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3383330
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3389141
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3393379
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3397646
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3407399
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3408078
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3412365
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3412607
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3416871
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3422535
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3425410
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3435847
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3452520
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3456302
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3458078
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3458866
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3463773
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3466476
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3474965
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3474976
00:36:12.877 Removing: /var/run/dpdk/spdk_pid3480249
00:36:13.135 Removing: /var/run/dpdk/spdk_pid3482213
00:36:13.135 Removing: /var/run/dpdk/spdk_pid3484169
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3485229
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3487196
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3488300
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3497249
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3497714
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3498369
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3500653
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3501133
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3501692
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3505633
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3505640
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3507172
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3507722
00:36:13.136 Removing: /var/run/dpdk/spdk_pid3507735
00:36:13.136 Clean
00:36:13.136 10:53:53 -- common/autotest_common.sh@1453 -- # return 0
00:36:13.136 10:53:53 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup
00:36:13.136 10:53:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:13.136 10:53:53 -- common/autotest_common.sh@10 -- # set +x
00:36:13.136 10:53:53 -- spdk/autotest.sh@391 -- # timing_exit autotest
00:36:13.136 10:53:53 -- common/autotest_common.sh@732 -- # xtrace_disable
00:36:13.136 10:53:53 -- common/autotest_common.sh@10 -- # set +x
00:36:13.136 10:53:53 -- spdk/autotest.sh@392 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:13.136 10:53:53 -- spdk/autotest.sh@394 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]]
00:36:13.136 10:53:53 -- spdk/autotest.sh@394 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log
00:36:13.136 10:53:53 -- spdk/autotest.sh@396 -- # [[ y == y ]]
00:36:13.136 10:53:53 -- spdk/autotest.sh@398 -- # hostname
00:36:13.136 10:53:53 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-06 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info
00:36:13.393 geninfo: WARNING: invalid characters removed from testname!
00:36:35.339 10:54:14 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:36.767 10:54:17 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:38.670 10:54:19 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:40.574 10:54:21 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:42.479 10:54:22 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:44.387 10:54:24 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info
00:36:46.293 10:54:26 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR
00:36:46.293 10:54:26 -- spdk/autorun.sh@1 -- $ timing_finish
00:36:46.293 10:54:26 -- common/autotest_common.sh@738 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt ]]
00:36:46.293 10:54:26 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl
00:36:46.293 10:54:26 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]]
00:36:46.293 10:54:26 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt
00:36:46.293 + [[ -n 2951489 ]]
00:36:46.293 + sudo kill 2951489
00:36:46.303 [Pipeline] }
00:36:46.318 [Pipeline] // stage
00:36:46.323 [Pipeline] }
00:36:46.337 [Pipeline] // timeout
00:36:46.342 [Pipeline] }
00:36:46.357 [Pipeline] // catchError
00:36:46.362 [Pipeline] }
00:36:46.374 [Pipeline] // wrap
00:36:46.379 [Pipeline] }
00:36:46.391 [Pipeline] // catchError
00:36:46.400 [Pipeline] stage
00:36:46.402 [Pipeline] { (Epilogue)
00:36:46.415 [Pipeline] catchError
00:36:46.417 [Pipeline] {
00:36:46.430 [Pipeline] echo
00:36:46.432 Cleanup processes
00:36:46.438 [Pipeline] sh
00:36:46.721 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:46.721 3518926 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:46.734 [Pipeline] sh
00:36:47.015 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
00:36:47.015 ++ grep -v 'sudo pgrep'
00:36:47.015 ++ awk '{print $1}'
00:36:47.015 + sudo kill -9
00:36:47.015 + true
00:36:47.023 [Pipeline] sh
00:36:47.301 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh
00:36:59.516 [Pipeline] sh
00:36:59.800 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh
00:36:59.800 Artifacts sizes are good
00:36:59.814 [Pipeline] archiveArtifacts
00:36:59.822 Archiving artifacts
00:36:59.962 [Pipeline] sh
00:37:00.247 + sudo chown -R sys_sgci: /var/jenkins/workspace/nvmf-tcp-phy-autotest
00:37:00.263 [Pipeline] cleanWs
00:37:00.273 [WS-CLEANUP] Deleting project workspace...
00:37:00.273 [WS-CLEANUP] Deferred wipeout is used...
00:37:00.280 [WS-CLEANUP] done
00:37:00.282 [Pipeline] }
00:37:00.299 [Pipeline] // catchError
00:37:00.311 [Pipeline] sh
00:37:00.615 + logger -p user.info -t JENKINS-CI
00:37:00.636 [Pipeline] }
00:37:00.651 [Pipeline] // stage
00:37:00.657 [Pipeline] }
00:37:00.673 [Pipeline] // node
00:37:00.678 [Pipeline] End of Pipeline
00:37:00.727 Finished: SUCCESS